Summary of Flink usage problems

A list of bugs I have run into, with notes; I will keep refining it over time.

1. When registering a table, do not name it result

 tableEnv.registerTable("result_agg", table);

As shown above. If instead you write


 tableEnv.registerTable("result", table);

you will get the following error (result is a reserved word in the SQL parser):


org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: SQL parse failed. Encountered "from result" at line 1, column 13.
Was expecting one of:
    <EOF> 
    "ORDER" ...
    "LIMIT" ...
    "OFFSET" ...
    "FETCH" ...
    "FROM" <IDENTIFIER> ...
    "FROM" <QUOTED_IDENTIFIER> ...
    "FROM" <BACK_QUOTED_IDENTIFIER> ...
    "FROM" <BRACKET_QUOTED_IDENTIFIER> ...
    "FROM" <UNICODE_QUOTED_IDENTIFIER> ...
    "FROM" "LATERAL" ...
    "FROM" "(" ...
    "FROM" "UNNEST" ...
    "FROM" "TABLE" ...
    "," ...
    "AS" ...
    <IDENTIFIER> ...
    <QUOTED_IDENTIFIER> ...
    <BACK_QUOTED_IDENTIFIER> ...
    <BRACKET_QUOTED_IDENTIFIER> ...
    <UNICODE_QUOTED_IDENTIFIER> ...
2. If a MySQL column is of type tinyint, cast it first; otherwise Flink throws an error:
     Caused by: java.lang.ClassCastException: java.lang.Boolean cannot be cast to java.lang.String
    at org.apache.flink.api.common.typeutils.base.StringSerializer.serialize(StringSerializer.java:33)
    at org.apache.flink.api.java.typeutils.runtime.RowSerializer.serialize(RowSerializer.java:161)

The cast is simple, for example:


select project_fid, cast(project_info_type as CHAR) as type from my_table
3. When joining, if one side of the join contains MAP-typed data (for example produced by the collect aggregate, similar to MySQL's group_concat), you get a NullPointerException along the lines of:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Error while applying rule FlinkLogicalJoinConverter, args [rel#56:LogicalJoin.NONE(left=rel#52:Subset#0.NONE,right=rel#55:Subset#2.NONE,condition==($0, $2),joinType=inner)]

This bug has not been fixed yet; the issue is tracked at https://issues.apache.org/jira/browse/FLINK-11433

The only way around it is to turn the MAP data into a string with a user-defined scalar function:

import java.util.Map;

import org.apache.flink.table.functions.ScalarFunction;

// Scalar UDF that flattens a MAP value (such as the result of collect())
// into a comma-separated string of its keys.
public class MapToString extends ScalarFunction {

    public String eval(Map<String, Integer> map) {
        if (map == null || map.isEmpty()) {
            return "";
        }
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            sb.append(entry.getKey()).append(',');
        }
        // Drop the trailing comma.
        return sb.substring(0, sb.length() - 1);
    }
}

Invoke it like this:

select id, mapToString(collect(type)) as type from my_table group by id

Of course you also need to register the function first:


tableEnv.registerFunction("mapToString", new MapToString());
4. Type conversion errors

I keep running into type-conversion errors lately; I have found two causes so far, recorded here.

a. A tinyint(1) column is automatically converted to boolean. Besides the cast shown above, a more elegant fix is to add the parameter tinyInt1isBit=false to the MySQL connection URL (note the capitalization).
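For example, a JDBC connection URL with this parameter appended might look like the following (host, port, and database name are placeholders):

```
jdbc:mysql://localhost:3306/mydb?tinyInt1isBit=false
```

With tinyInt1isBit=false, the MySQL driver returns tinyint(1) columns as numbers instead of booleans, so the ClassCastException above no longer occurs.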

b. Sometimes a MySQL id column is clearly int, yet Flink treats it as long. MyBatis apparently had a similar issue (https://blog.csdn.net/ahwsk/article/details/81975117).
Looking more closely at the table definition (someone else's table), I noticed the "unsigned" option was checked; after removing it and rerunning, the error was gone. So the unsigned attribute was what broke Flink's conversion: an unsigned column has one more bit of range than a signed one, and that extra bit can exceed the range of Java's int (Java ints are always signed), so the value is automatically promoted to long.
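The promotion is easy to verify with plain Java arithmetic; a minimal sketch:

```java
public class UnsignedIntRange {
    public static void main(String[] args) {
        // Max value of MySQL INT UNSIGNED is 2^32 - 1, which does not fit
        // into Java's signed 32-bit int (max 2^31 - 1), so a wider type
        // such as long is required.
        long unsignedIntMax = 4294967295L;
        System.out.println(unsignedIntMax > Integer.MAX_VALUE); // prints "true"
    }
}
```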

5. Could not find a suitable table factory for 'org.apache.flink.table.factories.StreamTableSourceFactory'

Although the fat jar already contains the corresponding classes, the error still occurs. The eventual fix was to additionally place the relevant jars into Flink's lib directory, which resolved the problem.

6. cannot assign instance of org.apache.commons.collections.map.LinkedMap to field org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit of type org.apache.commons.collections.map.LinkedMap in instance of org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

This error is fixed by adding classloader.resolve-order: parent-first to flink-conf.yaml.

7. Caused by: java.util.concurrent.CompletionException: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka.tcp://flink@flink88:15265/user/taskmanager_0#66653408]] after [10000 ms]. Sender[null] sent message of type "org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation".

When this error appears, add the following to the Flink configuration:

akka.ask.timeout: 120s
web.timeout: 120000

8. Exception while invoking ApplicationClientProtocolPBClientImpl.forceKillApplication over null. Retrying after sleeping for 30000ms

This error occurs when submitting a job. Use yarn logs -applicationId application_1565600987111 to inspect the logs and find the cause. In my case the cause was that akka.watch.heartbeat.pause was smaller than akka.watch.heartbeat.interval; after correcting it the error disappeared.
Alternatively, kill the CliFrontend process.

9. Exception while invoking ApplicationClientProtocolPBClientImpl.forceKillApplication over null. Retrying after sleeping for 30000ms.
    java.io.IOException: The client is stopped
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
    at org.apache.hadoop.ipc.Client.call(Client.java:1381)
    at org.apache.hadoop.ipc.Client.call(Client.java:1345)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)

This exception is again a memory problem. Check whether enough memory is actually free; it must show in the free column, not available. If you find that available is high but free is not, run the following two commands to release the page cache:

sync
echo 3 > /proc/sys/vm/drop_caches

10. Caused by: java.net.BindException: Could not start rest endpoint on any port in port range 8081
at org.apache.flink.runtime.rest.RestServerEndpoint.start(RestServerEndpoint.java:219)
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:161)
... 9 more

This error means the port is already in use. Looking at the relevant source code:


Iterator<Integer> portsIterator;
try {
    portsIterator = NetUtils.getPortRangeFromString(restBindPortRange);
} catch (IllegalConfigurationException e) {
    throw e;
} catch (Exception e) {
    throw new IllegalArgumentException("Invalid port range definition: " + restBindPortRange);
}

The corresponding setting is rest.bind-port in flink-conf.yaml.
If rest.bind-port is not set, the REST server binds to the rest.port setting (default 8081).
rest.bind-port can be a list such as 50100,50101, or a range such as 50100-50200. The range format is recommended to avoid port conflicts.
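For reference, the parsing that NetUtils.getPortRangeFromString performs on such values can be sketched roughly as follows (a simplified illustration with a hypothetical class name, not the actual Flink implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class PortRange {
    // Expand a port specification such as "50100,50101" or "50100-50200"
    // into the list of candidate ports it denotes.
    static List<Integer> parse(String spec) {
        List<Integer> ports = new ArrayList<>();
        for (String part : spec.split(",")) {
            if (part.contains("-")) {
                // Range form: add every port from start to end inclusive.
                String[] bounds = part.split("-");
                int start = Integer.parseInt(bounds[0].trim());
                int end = Integer.parseInt(bounds[1].trim());
                for (int p = start; p <= end; p++) {
                    ports.add(p);
                }
            } else {
                // List form: a single port.
                ports.add(Integer.parseInt(part.trim()));
            }
        }
        return ports;
    }

    public static void main(String[] args) {
        System.out.println(parse("50100,50101").size()); // prints "2"
        System.out.println(parse("50100-50104").size()); // prints "5"
    }
}
```

Either form expands to a set of candidate ports, which is why the range format gives the REST server more ports to fall back on when one is occupied.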
