exceeded the 80 characters length limit and was truncated.

The Flink job ran fine when debugged in IDEA, but after submitting it to a standalone cluster, it produced no data. The logs contained the following messages:

2020-06-04 21:05:05,897 WARN  org.apache.flink.metrics.MetricGroup                          - The operator name DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) exceeded the 80 characters length limit and was truncated.
2020-06-04 21:05:05,897 WARN  org.apache.flink.metrics.MetricGroup                          - The operator name DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) exceeded the 80 characters length limit and was truncated.
2020-06-04 21:05:05,902 INFO  org.apache.flink.runtime.taskmanager.Task                     - CHAIN DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) -> Filter (Filter at main(DWS_ReceiptBillEntry_Sum_total.java:30)) (1/2) (9493d2975899a4bbf1a40770ac085a39) switched from RUNNING to FINISHED.
2020-06-04 21:05:05,902 INFO  org.apache.flink.runtime.taskmanager.Task                     - Freeing task resources for CHAIN DataSource (at createInput(ExecutionEnvironment.java:576) (com.asn.re.warehouse.dws.source.DWS_ReceiptBillEntry_IPF)) -> Filter (Filter at main(DWS_ReceiptBillEntry_Sum_total.java:30)) (1/2) (9493d2975899a4bbf1a40770ac085a39).

The key message is this line: exceeded the 80 characters length limit and was truncated. The custom InputFormat's name is too long, exceeding the 80-character limit, so it was automatically truncated, and then the task switched from RUNNING to FINISHED...

Update (June 5)

This warning is just part of Flink's built-in metrics reporting and does not affect the execution of the program at all.
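If the warning itself is noisy, the operator can be given an explicit short name so the metric name no longer needs truncating. Below is a minimal DataSet API sketch; the file path and operator name are placeholders, and TextInputFormat stands in for the custom InputFormat from the logs:

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;

public class NamedSourceExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // By default the operator name contains the createInput() call site plus the
        // InputFormat's fully qualified class name, which easily exceeds 80 characters.
        // name() replaces it with a short, stable identifier for logs and metrics.
        DataSet<String> source = env
                .createInput(new TextInputFormat(new Path("/tmp/input.txt")),
                        BasicTypeInfo.STRING_TYPE_INFO)
                .name("receipt-bill-entry-source"); // hypothetical short name

        System.out.println(source.count());
    }
}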

The real reason the job finished without producing any data is that some of the Flink dependencies were not excluded when packaging the job jar. Setting their Maven scope to provided fixes it:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.12</artifactId>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-java-bridge_2.12</artifactId>
    <scope>provided</scope>
</dependency>
<!-- If you want to run Table API & SQL programs locally within your IDE, you must add one of the following sets of modules -->
<!-- Either... (for the old planner that was available before Flink 1.9) -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_2.12</artifactId>
    <scope>provided</scope>
</dependency>
<!-- Or... (for the new Blink planner) -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.12</artifactId>
    <scope>provided</scope>
</dependency>

<!-- Internally, parts of the table ecosystem are implemented in Scala, so make sure to add the following dependency for both batch and streaming applications -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.12</artifactId>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-common</artifactId>
    <scope>provided</scope>
</dependency>
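With provided scope, the Flink runtime classes stay out of the job jar, since the standalone cluster already ships them on its classpath. A minimal maven-shade-plugin sketch for packaging the job follows; the plugin version and main class below are assumptions, not taken from the original project:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.4</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <!-- provided-scope dependencies (the Flink modules above) are
                             excluded from the shaded jar automatically -->
                        <transformers>
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <!-- hypothetical entry point; use your job's main class -->
                                <mainClass>com.asn.re.warehouse.dws.DWS_ReceiptBillEntry_Sum_total</mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>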