exceeded the 80 characters length limit and was truncated.

A Flink job ran fine when debugged in IDEA, but after being submitted to a standalone cluster it produced no data. The log showed the following:

2020-06-04 21:05:05,897 WARN  org.apache.flink.metrics.MetricGroup                          - The operator name DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) exceeded the 80 characters length limit and was truncated.
2020-06-04 21:05:05,897 WARN  org.apache.flink.metrics.MetricGroup                          - The operator name DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) exceeded the 80 characters length limit and was truncated.
2020-06-04 21:05:05,902 INFO  org.apache.flink.runtime.taskmanager.Task                     - CHAIN DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) -> Filter (Filter at main(DWS_ReceiptBillEntry_Sum_total.java:30)) (1/2) (9493d2975899a4bbf1a40770ac085a39) switched from RUNNING to FINISHED.
2020-06-04 21:05:05,902 INFO  org.apache.flink.runtime.taskmanager.Task                     - Freeing task resources for CHAIN DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) -> Filter (Filter at main(DWS_ReceiptBillEntry_Sum_total.java:30)) (1/2) (9493d2975899a4bbf1a40770ac085a39).

The key message is the line exceeded the 80 characters length limit and was truncated: the name of the custom InputFormat is longer than 80 characters, so it is automatically truncated, and the task then switches from RUNNING to FINISHED...
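
Side note on the warning itself: it is emitted because the auto-generated operator name, "DataSource (at createInput(...) (...))", is built from the call site and the full class name, which pushes it past 80 characters. You can silence it by giving the source an explicit short name. A minimal DataSet API sketch, with a hypothetical text-file input standing in for the custom InputFormat:

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.DataSource;

public class ShortSourceName {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Without .name(...), Flink derives the operator name from the call site,
        // e.g. "DataSource (at createInput(ExecutionEnvironment.java:576) (...))",
        // which is what exceeds the 80-character metrics limit in the log above.
        DataSource<String> source = env
                .readTextFile("hdfs:///tmp/receipt_bill_entry") // hypothetical path
                .name("dws-receipt-bill-entry");                // short, explicit name

        System.out.println(source.count());
    }
}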

Update (June 5)

This warning is part of Flink's default metrics reporting and does not affect the execution of the program at all.

The actual reason the job ran to completion but produced no data is that some of the Flink dependencies were not excluded when packaging the jar. Setting their Maven scope to provided fixes it:

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.12</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-java-bridge_2.12</artifactId>
            <scope>provided</scope>
        </dependency>
        <!-- if you want to run the Table API & SQL programs locally within your IDE, you must add one of the following set of modules-->
        <!-- Either... (for the old planner that was available before Flink 1.9) -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.12</artifactId>
            <scope>provided</scope>
        </dependency>
        <!-- or.. (for the new Blink planner) -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner-blink_2.12</artifactId>
            <scope>provided</scope>
        </dependency>

        <!--Internally, parts of the table ecosystem are implemented in Scala. Therefore, please make sure to add the following dependency for both batch and streaming applications-->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.12</artifactId>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-common</artifactId>
            <scope>provided</scope>
        </dependency>
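
On the packaging side, here is a sketch of a maven-shade-plugin setup in the spirit of the Flink quickstart pom. Shade leaves provided-scope dependencies out of the fat jar by default; the excludes below trim a few remaining transitive artifacts (the exact list depends on your project):

        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-shade-plugin</artifactId>
                    <executions>
                        <execution>
                            <phase>package</phase>
                            <goals>
                                <goal>shade</goal>
                            </goals>
                            <configuration>
                                <!-- provided-scope dependencies are not bundled into the shaded jar -->
                                <artifactSet>
                                    <excludes>
                                        <exclude>org.apache.flink:force-shading</exclude>
                                        <exclude>com.google.code.findbugs:jsr305</exclude>
                                        <exclude>org.slf4j:*</exclude>
                                        <exclude>log4j:*</exclude>
                                    </excludes>
                                </artifactSet>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>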
