Flink Table: Processing Nested JSON with the Kafka Connector

   When you connect the Flink Table API to Kafka and consume JSON data, the flat (single-level) case is straightforward and well covered by the official docs and many online examples, but there is little material on nested JSON. The key to handling nested JSON is the schema definition: a nested JSON object is declared as a ROW field with named sub-fields.
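For reference, the kind of message the schema below describes would look roughly like this (values are illustrative, and the exact timestamp string format accepted for SQL_TIMESTAMP depends on the flink-json version in use):

    {"id": 1, "name": "flink", "timestamp": "2020-01-01T12:00:00Z", "nested": {"booleanField": true, "decimalField": 3.14}}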

     

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.descriptors.Json;
import org.apache.flink.table.descriptors.Kafka;
import org.apache.flink.table.descriptors.Schema;
import org.apache.flink.types.Row;

public class NestedJsonKafkaDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings bsSettings = EnvironmentSettings.newInstance()
                .useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, bsSettings);

        tableEnv.connect(new Kafka()
                .version("universal")
                .topic("w001")
                .property("zookeeper.connect", "192.168.0.160:3181")
                .property("bootstrap.servers", "192.168.0.160:9092,192.168.0.161:9092,192.168.0.162:9092")
                .property("group.id", "w01")
                .startFromLatest())
                // derive the JSON format from the table schema declared below
                .withFormat(new Json().deriveSchema())
                // a nested JSON object is declared as a named ROW type
                .withSchema(new Schema()
                        .field("id", Types.BIG_DEC)
                        .field("name", Types.STRING)
                        .field("timestamp", Types.SQL_TIMESTAMP)
                        .field("nested", Types.ROW_NAMED(
                                new String[]{"booleanField", "decimalField"},
                                new TypeInformation[]{Types.BOOLEAN, Types.BIG_DEC})))
                .inAppendMode()
                .registerTableSource("test");

        // nested fields are addressed with dot notation in SQL
        Table query = tableEnv.sqlQuery("select name, nested.booleanField from test");
        tableEnv.toAppendStream(query, Row.class).print();
        tableEnv.execute("streaming");
    }
}
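ROW_NAMED types compose, so deeper nesting is handled the same way: declare a ROW field whose element type is itself a ROW. A minimal sketch for a hypothetical two-level object (the field names payload, detail, and cnt are made up for illustration):

        // hypothetical message shape: {"payload": {"detail": {"cnt": 1}}}
        Schema twoLevel = new Schema()
                .field("payload", Types.ROW_NAMED(
                        new String[]{"detail"},
                        new TypeInformation[]{
                                Types.ROW_NAMED(
                                        new String[]{"cnt"},
                                        new TypeInformation[]{Types.BIG_DEC})
                        }));
        // queried with chained dot notation: select payload.detail.cnt from test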

 
