Implementing Structured Streaming in SQL

The only configuration needed is a single SQL file.

1. Socket input, console output

Configuration:

CREATE TABLE SocketTable(
    word String,
    valuecount int
)WITH(
    type='socket',
    host='hadoop-sh1-core1',
    port='9998',
    delimiter=' '
);

create SINK console(
)WITH(
    type='console',
    outputmode='complete'
);

insert into console select word,count(*) from SocketTable group by word;

The statements above first create a table. The first half declares the fields and their types; the second half configures a data source of type socket, with a space as the delimiter (the default is a comma). A streaming table with the same name as in the CREATE statement is then created, using the configured fields as its schema.

Next we create the sink, the output table: console is defined as a table whose type is console, with outputmode set to complete (which is also the default).

Finally the statement itself. It must begin with insert into (this is required); the table being inserted into is the sink table, and what follows is the SQL that processes the data. In this example that is select word,count(*) from SocketTable group by word. With this, data flows from the socket to the console using Structured Streaming's default streaming behavior.

Input:
a 2
a 2
Output:
Batch: 0
-------------------------------------------
+----+--------+
|WORD|count(1)|
+----+--------+
+----+--------+

-------------------------------------------
Batch: 1
-------------------------------------------
+----+--------+
|WORD|count(1)|
+----+--------+
|a   |4       |
+----+--------+
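
For reference, here is a minimal Scala sketch of what this pipeline looks like in native Structured Streaming. This is my own illustration, not the code the project actually generates; in particular, splitting each line on the delimiter and keeping the first column as word is an assumption:

import org.apache.spark.sql.SparkSession

object SocketWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("SocketWordCount").getOrCreate()
    import spark.implicits._

    // type='socket', host='hadoop-sh1-core1', port='9998'
    val lines = spark.readStream
      .format("socket")
      .option("host", "hadoop-sh1-core1")
      .option("port", "9998")
      .load()

    // delimiter=' ': split each line on the space, keep the first column as `word`
    val words = lines.as[String].map(_.split(" ")(0)).toDF("word")

    // insert into console select word,count(*) from SocketTable group by word
    val counts = words.groupBy("word").count()

    // type='console', outputmode='complete'
    counts.writeStream
      .format("console")
      .outputMode("complete")
      .start()
      .awaitTermination()
  }
}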

2. Kafka input, console output

CREATE TABLE kafkaTable(
    word string,
    wordcount int
)WITH(
    type='kafka',
    kafka.bootstrap.servers='dfttshowkafka001:9092',
    subscribe='test',
    group='test'
);

create SINK consoleOut(
)WITH(
    type='console',
    outputmode='complete',
    process='2s'
);

insert into consoleOut select word,count(wordcount) from kafkaTable group by word;

These statements work the same way as before; the consoleOut configuration adds one option, process='2s', which means the console prints a batch every 2 seconds.
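
Assuming process='2s' maps to a processing-time trigger, the native equivalent would look roughly like this sketch (the comma-delimited parsing of the Kafka value is my assumption):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming.Trigger

object KafkaWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("KafkaWordCount").getOrCreate()
    import spark.implicits._

    // type='kafka', kafka.bootstrap.servers=..., subscribe='test'
    val kafka = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "dfttshowkafka001:9092")
      .option("subscribe", "test")
      .load()

    // Kafka values arrive as binary; parse "word,wordcount" with the default comma delimiter
    val parsed = kafka
      .selectExpr("CAST(value AS STRING) AS line")
      .select(
        split($"line", ",").getItem(0).as("word"),
        split($"line", ",").getItem(1).cast("int").as("wordcount"))

    // insert into consoleOut select word,count(wordcount) from kafkaTable group by word
    val counts = parsed.groupBy("word").agg(count("wordcount"))

    // process='2s' -> fire a micro-batch every 2 seconds
    counts.writeStream
      .format("console")
      .outputMode("complete")
      .trigger(Trigger.ProcessingTime("2 seconds"))
      .start()
      .awaitTermination()
  }
}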

3. CSV input, console output

CREATE TABLE csvTable(
    name string,
    age int
)WITH(
    type='csv',
    delimiter=';',
    path='F:\E\wordspace\sqlstream\filepath'
);

create SINK console(
)WITH(
    type='console',
    outputmode='complete'
);

insert into console select name,sum(age) from csvTable group by name;


The data in the input CSV file:

zhang;23
wang;24
li;25
zhang;56

Output:

root
 |-- NAME: string (nullable = true)
 |-- AGE: integer (nullable = true)

-------------------------------------------
Batch: 0
-------------------------------------------
+-----+--------+
|NAME |sum(AGE)|
+-----+--------+
|zhang|79      |
|wang |24      |
|li   |25      |
+-----+--------+
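
In native Structured Streaming a file source needs an explicit schema, which is presumably what the field list in the CREATE TABLE supplies. A sketch under that assumption:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

object CsvAgeSum {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("CsvAgeSum").getOrCreate()

    // The field list of the CREATE TABLE becomes the required file-source schema
    val schema = new StructType()
      .add("name", StringType)
      .add("age", IntegerType)

    // type='csv'; delimiter=';' maps to the csv source's `sep` option
    val people = spark.readStream
      .format("csv")
      .option("sep", ";")
      .schema(schema)
      .load("F:\\E\\wordspace\\sqlstream\\filepath")

    // insert into console select name,sum(age) from csvTable group by name
    val sums = people.groupBy("name").agg(sum("age"))

    sums.writeStream
      .format("console")
      .outputMode("complete")
      .start()
      .awaitTermination()
  }
}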

4. Socket input, console output, with a processing-time window

CREATE TABLE SocketTable(
    word String
)WITH(
    type='socket',
    host='hadoop-sh1-core1',
    processwindow='10 seconds,5 seconds',
    watermark='10 seconds',
    port='9998'
);

create SINK console(
)WITH(
    type='console',
    outputmode='complete'
);

insert into console select processwindow,word,count(*) from SocketTable group by processwindow,word;

The socket config above adds two options, processwindow and watermark. processwindow works much like Spark Streaming's windowing: the first value is the window length and the second is the slide interval. Giving a single value, or two identical values, yields a tumbling window.

watermark is a lateness allowance, i.e. how long your data is allowed to be late; with processing time it seems to have little real effect.

In the SQL statement, processwindow actually carries two values, the start and the end of the window. Let's look at the results:

-------------------------------------------
Batch: 0
-------------------------------------------
+-------------+----+--------+
|PROCESSWINDOW|WORD|count(1)|
+-------------+----+--------+
+-------------+----+--------+

-------------------------------------------
Batch: 1
-------------------------------------------
+------------------------------------------+----+--------+
|PROCESSWINDOW                             |WORD|count(1)|
+------------------------------------------+----+--------+
|[2018-12-11 19:17:00, 2018-12-11 19:17:10]|c   |1       |
|[2018-12-11 19:17:00, 2018-12-11 19:17:10]|a   |3       |
+------------------------------------------+----+--------+

-------------------------------------------
Batch: 2
-------------------------------------------
+------------------------------------------+----+--------+
|PROCESSWINDOW                             |WORD|count(1)|
+------------------------------------------+----+--------+
|[2018-12-11 19:17:00, 2018-12-11 19:17:10]|c   |2       |
|[2018-12-11 19:17:00, 2018-12-11 19:17:10]|a   |4       |
+------------------------------------------+----+--------+

You can also omit processwindow from the select part, which drops the PROCESSWINDOW column from the output, but it must stay in the group by so that the data is still grouped by window. A sketch of how this might look natively follows below.
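
One plausible way the processwindow option could be realized in native code is to stamp each record with the current wall-clock time and apply a sliding window to that column. This is an assumption on my part, sketched here:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ProcessWindowCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ProcessWindowCount").getOrCreate()
    import spark.implicits._

    val words = spark.readStream
      .format("socket")
      .option("host", "hadoop-sh1-core1")
      .option("port", "9998")
      .load()
      .select($"value".as("word"))

    // processwindow='10 seconds,5 seconds': a 10-second window sliding every
    // 5 seconds, keyed on the wall-clock time at which each record arrives
    val counts = words
      .withColumn("processwindow",
        window(current_timestamp(), "10 seconds", "5 seconds"))
      .groupBy($"processwindow", $"word")
      .count()

    counts.writeStream
      .format("console")
      .outputMode("complete")
      .option("truncate", "false")
      .start()
      .awaitTermination()
  }
}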

5. Socket input, console output, with an event-time window

The main difference between event time and processing time: event time processes data according to when the event actually occurred, while processing time handles each record as it arrives.

CREATE TABLE SocketTable(
    timestamp Timestamp,
    word String
)WITH(
    type='socket',
    host='hadoop-sh1-core1',
    eventfield='timestamp',
    eventwindow='10 seconds,5 seconds',
    watermark='10 seconds',
    port='9998'
);

create SINK console(
)WITH(
    type='console',
    outputmode='complete'
);

insert into console select eventwindow,word,count(*) from SocketTable group by eventwindow,word;

Event time is derived from the event itself, so your data must contain a field that represents time. In the example above that field is timestamp, of type Timestamp.

The second half of the configuration has an eventfield option, which specifies which of the declared fields serves as the event time.

eventwindow means the same thing as processwindow; only the name differs.

watermark is the allowed event lateness. Because processing is based on event time, records will inevitably arrive out of order; setting watermark to 10 seconds means a record's timestamp may be up to 10 seconds late, and records arriving later than that are dropped.
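
In native Structured Streaming this maps directly onto withWatermark plus a window over the event-time column. A sketch, again assuming comma-delimited parsing of the socket line:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventWindowCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("EventWindowCount").getOrCreate()
    import spark.implicits._

    // eventfield='timestamp': the first column of each line is the event time
    val events = spark.readStream
      .format("socket")
      .option("host", "hadoop-sh1-core1")
      .option("port", "9998")
      .load()
      .select(
        split($"value", ",").getItem(0).cast("timestamp").as("timestamp"),
        split($"value", ",").getItem(1).as("word"))

    // watermark='10 seconds': records arriving more than 10 seconds behind
    // the maximum event time seen so far may be dropped
    // eventwindow='10 seconds,5 seconds': 10-second windows sliding every 5 seconds
    val counts = events
      .withWatermark("timestamp", "10 seconds")
      .groupBy(
        window($"timestamp", "10 seconds", "5 seconds").as("eventwindow"),
        $"word")
      .count()

    counts.writeStream
      .format("console")
      .outputMode("complete")
      .option("truncate", "false")
      .start()
      .awaitTermination()
  }
}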

Schema printed during the run:
root
 |-- TIMESTAMP: timestamp (nullable = true)
 |-- WORD: string (nullable = true)
 |-- eventwindow: struct (nullable = true)
 |    |-- start: timestamp (nullable = true)
 |    |-- end: timestamp (nullable = true)
 
Input data:
2018-12-07 16:36:12,a
2018-12-07 16:36:22,a
2018-12-07 16:36:32,b
2018-12-07 16:36:42,a
2018-12-07 16:36:52,a

Output:
Batch: 0
-------------------------------------------
+-----------+----+--------+
|EVENTWINDOW|WORD|count(1)|
+-----------+----+--------+
+-----------+----+--------+

-------------------------------------------
Batch: 1
-------------------------------------------
+------------------------------------------+----+--------+
|EVENTWINDOW                               |WORD|count(1)|
+------------------------------------------+----+--------+
|[2018-12-07 16:36:05, 2018-12-07 16:36:15]|a   |1       |
|[2018-12-07 16:36:10, 2018-12-07 16:36:20]|a   |1       |
+------------------------------------------+----+--------+

-------------------------------------------
Batch: 2
-------------------------------------------
+------------------------------------------+----+--------+
|EVENTWINDOW                               |WORD|count(1)|
+------------------------------------------+----+--------+
|[2018-12-07 16:36:30, 2018-12-07 16:36:40]|b   |1       |
|[2018-12-07 16:36:15, 2018-12-07 16:36:25]|a   |1       |
|[2018-12-07 16:36:45, 2018-12-07 16:36:55]|a   |1       |
|[2018-12-07 16:36:40, 2018-12-07 16:36:50]|a   |1       |
|[2018-12-07 16:36:20, 2018-12-07 16:36:30]|a   |1       |
|[2018-12-07 16:36:50, 2018-12-07 16:37:00]|a   |1       |
|[2018-12-07 16:36:25, 2018-12-07 16:36:35]|b   |1       |
|[2018-12-07 16:36:05, 2018-12-07 16:36:15]|a   |1       |
|[2018-12-07 16:36:10, 2018-12-07 16:36:20]|a   |1       |
|[2018-12-07 16:36:35, 2018-12-07 16:36:45]|a   |1       |
+------------------------------------------+----+--------+

6. Update the SQL statement without restarting the job (to be implemented)

7. Pass Spark configuration parameters through the config for tuning (to be implemented)

8. Custom UDFs (to be implemented)

(If you like it, give it a star.)

GitHub: https://github.com/peopleindreamdontsleep/StructuredStreamingInSQL
