This is the second article in the "Flink DataSource Trilogy" series. The previous article, "Flink DataSource Trilogy, Part One: Direct APIs", covered creating a DataSource with the StreamExecutionEnvironment API. Today we practice Flink's built-in connectors (the red box in the figure below), which are attached through StreamExecutionEnvironment's addSource method:
Today's exercise uses Kafka as the data source: first we receive and process String messages, then JSON messages, deserializing the JSON into bean instances.
Links to the Flink DataSource Trilogy articles
Source download
If you'd rather not type the code yourself, the source for the whole series can be downloaded from GitHub; the addresses and link information are in the table below (https://github.com/zq2599/blog_demos):
Name | Link | Note |
---|---|---|
Project home page | https://github.com/zq2599/blog_demos | The project's home page on GitHub |
Git repo address (https) | https://github.com/zq2599/blog_demos.git | Repository address of the project source, https protocol |
Git repo address (ssh) | git@github.com:zq2599/blog_demos.git | Repository address of the project source, ssh protocol |
The repository contains multiple folders; this article's application is in the flinkdatasourcedemo folder, marked by the red box below:
Environment and versions
The environment and versions used in this exercise:
- JDK: 1.8.0_211
- Flink: 1.9.2
- Maven: 3.6.0
- OS: macOS Catalina 10.15.3 (MacBook Pro 13-inch, 2018)
- IDEA: 2018.3.5 (Ultimate Edition)
- Kafka: 2.4.0
- Zookeeper: 3.5.5
Make sure everything above is ready before continuing.
Matching Flink and Kafka versions
- Flink's official documentation explains Kafka version compatibility in detail: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html
- Pay particular attention to the universal Kafka connector mentioned there, introduced in Flink 1.7, which works with Kafka 1.0.0 and later:
- In the figure below, the red box marks the library my project depends on, and the blue box marks the class used to connect to Kafka; find the library and class matching your own Kafka version in the table:
String message processing in practice
- Create a topic named test001 on Kafka; a reference command:
./kafka-topics.sh \
--create \
--zookeeper 192.168.50.43:2181 \
--replication-factor 1 \
--partitions 2 \
--topic test001
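To confirm the topic was created as intended, you can describe it with the same tool (addresses as above; run from Kafka's bin directory):

```shell
# Describe the new topic: the output should report
# PartitionCount: 2 and ReplicationFactor: 1
./kafka-topics.sh \
  --describe \
  --zookeeper 192.168.50.43:2181 \
  --topic test001
```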
- Continue with the flinkdatasourcedemo project created in the previous article, open pom.xml, and add the following dependency:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka_2.11</artifactId>
<version>1.9.2</version>
</dependency>
- Add a new class Kafka240String.java, which connects to the broker and performs a WordCount over the received string messages:
package com.bolingcavalry.connector;
import com.bolingcavalry.Splitter;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import java.util.Properties;
public class Kafka240String {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//set the parallelism
env.setParallelism(2);
Properties properties = new Properties();
//broker address
properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
//zookeeper address
properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
//consumer group id
properties.setProperty("group.id", "flink-connector");
//instantiate the consumer
FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
"test001",
new SimpleStringSchema(),
properties
);
//start consuming from the latest offset, i.e. skip historical messages
flinkKafkaConsumer.setStartFromLatest();
//obtain the DataStream via addSource
DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);
//split the string messages from kafka into words and count them, with a 5-second window
dataStream
.flatMap(new Splitter())
.keyBy(0)
.timeWindow(Time.seconds(5))
.sum(1)
.print();
env.execute("Connector DataSource demo : kafka");
}
}
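The Splitter class imported above is reused from part one of this series; it wraps simple line-splitting logic in Flink's FlatMapFunction&lt;String, Tuple2&lt;String, Integer&gt;&gt;. A plain-Java sketch of that core logic, assuming whitespace tokenization (the real Splitter may differ in details):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SplitterSketch {
    // Emits one (word, 1) pair per whitespace-separated token, mirroring
    // what Splitter.flatMap does through a Flink Collector.
    static List<Map.Entry<String, Integer>> split(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split("\\s+")) {
            if (!word.isEmpty()) {
                out.add(new SimpleEntry<>(word, 1));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(split("hello flink hello kafka"));
    }
}
```

Downstream, `keyBy(0).timeWindow(...).sum(1)` groups these pairs by word and adds up the counts per window.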
- Make sure the Kafka topic exists, then run Kafka240String; you can see that consuming messages and counting words works as expected:
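To feed the job, the console producer shipped with Kafka is enough (run from Kafka's bin directory, broker address as configured earlier):

```shell
# Each line typed at the prompt becomes one Kafka message on test001
./kafka-console-producer.sh \
  --broker-list 192.168.50.43:9092 \
  --topic test001
```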
- That completes the string-message exercise; next, let's try JSON-formatted messages.
JSON message processing in practice
- The JSON messages we receive next will be deserialized into bean instances, which requires a JSON library; I chose gson.
- Add the gson dependency to pom.xml:
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
- Add a class Student.java, a plain bean with only two fields, id and name:
package com.bolingcavalry;
public class Student {
private int id;
private String name;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
- Add a class StudentSchema.java, an implementation of the DeserializationSchema interface, used when deserializing JSON into Student instances:
package com.bolingcavalry.connector;
import com.bolingcavalry.Student;
import com.google.gson.Gson;
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import java.io.IOException;
public class StudentSchema implements DeserializationSchema<Student>, SerializationSchema<Student> {
private static final Gson gson = new Gson();
/**
* Deserialization: convert the byte array into a Student instance
* @param bytes
* @return
* @throws IOException
*/
@Override
public Student deserialize(byte[] bytes) throws IOException {
return gson.fromJson(new String(bytes), Student.class);
}
@Override
public boolean isEndOfStream(Student student) {
return false;
}
/**
* Serialization: convert a Student instance into a byte array;
* not used in this demo, so an empty array is returned
* @param student
* @return
*/
@Override
public byte[] serialize(Student student) {
return new byte[0];
}
@Override
public TypeInformation<Student> getProducedType() {
return TypeInformation.of(Student.class);
}
}
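StudentSchema delegates the actual JSON parsing to gson's fromJson. To illustrate the byte[]-to-bean mapping that deserialize performs without pulling in third-party jars here, the sketch below swaps gson for a minimal regex extraction (the regex parser is only a stand-in for this illustration, not how the real class works):

```java
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DeserializeSketch {
    // Mirrors the Student bean from the article: just id and name
    static class Student {
        int id;
        String name;
    }

    // Same contract as StudentSchema.deserialize: raw Kafka payload in,
    // Student instance out. The real class calls gson.fromJson instead.
    static Student deserialize(byte[] bytes) {
        String json = new String(bytes, StandardCharsets.UTF_8);
        Student s = new Student();
        Matcher idM = Pattern.compile("\"id\"\\s*:\\s*(\\d+)").matcher(json);
        if (idM.find()) s.id = Integer.parseInt(idM.group(1));
        Matcher nameM = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        if (nameM.find()) s.name = nameM.group(1);
        return s;
    }

    public static void main(String[] args) {
        Student s = deserialize("{\"id\":1,\"name\":\"Tom\"}".getBytes(StandardCharsets.UTF_8));
        System.out.println(s.id + ":" + s.name);
    }
}
```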
- Add a new class Kafka240Bean.java, which connects to the broker, converts the received JSON messages into Student instances, and counts how many times each name appears, again with a 5-second window:
package com.bolingcavalry.connector;
import com.bolingcavalry.Student;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import java.util.Properties;
public class Kafka240Bean {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//set the parallelism
env.setParallelism(2);
Properties properties = new Properties();
//broker address
properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
//zookeeper address
properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
//consumer group id
properties.setProperty("group.id", "flink-connector");
//instantiate the consumer
FlinkKafkaConsumer<Student> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
"test001",
new StudentSchema(),
properties
);
//start consuming from the latest offset, i.e. skip historical messages
flinkKafkaConsumer.setStartFromLatest();
//obtain the DataStream via addSource
DataStream<Student> dataStream = env.addSource(flinkKafkaConsumer);
//deserialize the JSON from kafka into Student instances and count each name, with a 5-second window
dataStream.map(new MapFunction<Student, Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> map(Student student) throws Exception {
return new Tuple2<>(student.getName(), 1);
}
})
.keyBy(0)
.timeWindow(Time.seconds(5))
.sum(1)
.print();
env.execute("Connector DataSource demo : kafka bean");
}
}
- When testing, send JSON-formatted strings to kafka, and flink will report the count for each name:
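A quick way to send such a message, again with the console producer from Kafka's bin directory (addresses as configured earlier):

```shell
# Each message must be a complete JSON document matching the Student bean
echo '{"id":1,"name":"Tom"}' | ./kafka-console-producer.sh \
  --broker-list 192.168.50.43:9092 \
  --topic test001
```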
That concludes the built-in connector exercise; in the next article, we'll build a custom DataSource together.