Flink in Practice: A Flink SQL + Kafka Demo for the Streaming Scenario

Continuing the Flink hands-on series, this installment combines Flink with Kafka to build a streaming application. The complete code is available on GitHub.

The versions used are Flink 1.9.1 and Kafka 2.1.0, developed with Java 8.

This example applies Flink SQL in a streaming scenario. The goal is to read JSON strings from Kafka, each containing id, site, and proctime, and to compute each site's page views (pv) over 5-second windows.

1. Data Preparation

The JSON structure is simple: three fields, id, site, and proctime. You could write a small script that keeps feeding records into the Kafka topic (a minimal sketch follows the sample data below); here I simply paste data in via kafka-console-producer.sh.

{"id": 1, "site": "www.baidu.com", "proctime": "2020-04-11 00:00:01"}
{"id": 2, "site": "www.bilibili.com/", "proctime": "2020-04-11 00:00:02"}
{"id": 3, "site": "www.baidu.com", "proctime": "2020-04-11 00:00:03"}
{"id": 4, "site": "www.baidu.com/", "proctime": "2020-04-11 00:00:05"}
{"id": 5, "site": "www.baidu.com", "proctime": "2020-04-11 00:00:06"}
{"id": 6, "site": "www.bilibili.com/", "proctime": "2020-04-11 00:00:07"}
{"id": 7, "site": "https://github.com/tygxy", "proctime": "2020-04-11 00:00:08"}
{"id": 8, "site": "www.bilibili.com/", "proctime": "2020-04-11 00:00:09"}
{"id": 9, "site": "www.baidu.com", "proctime": "2020-04-11 00:00:11"}
{"id": 10, "site": "www.bilibili.com/", "proctime": "2020-04-11 00:00:18"}

2. Creating the Project

This reuses the project created in the previous post on Flink SQL in batch mode; see Flink in Practice: A Flink SQL Demo for the Batch Scenario for the details.

The main addition to pom.xml is a dependency for handling JSON:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-json</artifactId>
  <version>${flink.version}</version>
</dependency>
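One assumption worth flagging: the job in the next section uses the Kafka connector descriptor with version "0.10", which also requires the matching Kafka connector dependency on the classpath. If the batch project's pom.xml does not already include it, add something along these lines (${scala.binary.version} is typically 2.11 or 2.12 for Flink 1.9):

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
  <version>${flink.version}</version>
</dependency>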

3. Implementation

Create a Java class called SQLStreaming.

package com.cmbc.flink;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.descriptors.Json;
import org.apache.flink.table.descriptors.Kafka;
import org.apache.flink.table.descriptors.Schema;

import java.sql.Timestamp;


public class SQLStreaming {
    public static void main(String[] args) throws Exception {

        // set up the streaming execution environment and its table environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Kafka source: the "0.10" connector also talks to newer brokers;
        // zookeeper.connect is only needed by old connector versions and is
        // otherwise ignored
        Kafka kafka = new Kafka()
                .version("0.10")
                .topic("flink-streaming")
                .property("bootstrap.servers", "localhost:9092")
                .property("zookeeper.connect", "localhost:2181");
        tableEnv.connect(kafka)
                .withFormat(
                        new Json().failOnMissingField(true).deriveSchema()
                )
                .withSchema(
                        new Schema()
                                .field("id", Types.INT)
                                .field("site", Types.STRING)
                                // proctime is declared as a processing-time attribute,
                                // so the window below is driven by arrival time rather
                                // than by the timestamp string in the JSON payload
                                .field("proctime", Types.SQL_TIMESTAMP).proctime()
                )
                .inAppendMode()
                .registerTableSource("Data");

        // tumbling 5-second window, counting page views per site
        String sql = "SELECT TUMBLE_END(proctime, INTERVAL '5' SECOND) as processtime," +
                "count(1) as pv, site " +
                "FROM Data " +
                "GROUP BY TUMBLE(proctime, INTERVAL '5' SECOND), site";
        Table table = tableEnv.sqlQuery(sql);

        // windowed aggregates emit each row exactly once, so an append stream suffices
        tableEnv.toAppendStream(table, Info.class).print();
        tableEnv.execute("Flink SQL in Streaming");
    }

    // result POJO: the public no-arg constructor and public fields let Flink
    // map the query's columns to it by name
    public static class Info {
        public Timestamp processtime;
        public String site;
        public Long pv;

        public Info() {
        }

        public Info(Timestamp processtime, String site, Long pv) {
            this.processtime = processtime;
            this.site = site;
            this.pv = pv;
        }

        @Override
        public String toString() {
            return "processtime=" + processtime +
                    ", site=" + site +
                    ", pv=" + pv;
        }
    }
}
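For reference, the same windowed aggregation can also be written with the Table API instead of a SQL string. This is a sketch against the Data table registered above, using Flink 1.9's string-based expression syntax:

import org.apache.flink.table.api.Tumble;

// tumbling 5-second processing-time window, counting rows per site
Table result = tableEnv.scan("Data")
        .window(Tumble.over("5.seconds").on("proctime").as("w"))
        .groupBy("w, site")
        .select("w.end as processtime, site.count as pv, site");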

The logic is straightforward:

  • initialize the Flink execution and table environments
  • connect to Kafka, configure the connector and JSON format, map the schema, and register the source as a table
  • run the SQL query over the stream
  • print the results (or write them to a sink)

Because each tumbling window fires exactly once, the query's output is append-only, which is why toAppendStream is sufficient here.

4. Running It and the Results

  • Start Flink in local mode by running the start-cluster.sh script under the Flink installation directory
  • Start ZooKeeper:
sh zkServer start
  • Start Kafka:
sh kafka-server-start ../config/server.properties
  • Start kafka-console-producer.sh and paste in the sample data:
sh kafka-console-producer --broker-list localhost:9092 --topic flink-streaming
  • Run the Flink program and watch the results
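Each closed window prints one line per site in the format defined by Info.toString(), prefixed by the subtask index that print() adds. The actual timestamps and counts depend on when the records arrive, since the window is based on processing time; an output line looks roughly like this (values illustrative):

3> processtime=2020-04-11 00:00:05.0, site=www.baidu.com, pv=2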

