1. Transformation Operators
1.1 map
map is a one-to-one operation: for each element of the DataStream it computes exactly one output element.
DataStream<Integer> mapStream = dataStream.map(new MapFunction<String, Integer>() {
    @Override
    public Integer map(String value) throws Exception {
        return value.length();
    }
});
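Since Java 8 the same map can also be written as a lambda. Because Java erases the lambda's generic return type, Flink needs an explicit result-type hint via `returns(...)`. A minimal sketch, assuming the same `dataStream` as above and the Flink 1.x DataStream API:

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;

// Lambda form of the map above. The type hint is required because the
// lambda's Integer result type is erased at compile time.
DataStream<Integer> mapStream = dataStream
        .map((String value) -> value.length())
        .returns(Types.INT);
```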
1.2 flatMap
flatMap maps one input element to zero or more output elements, for example splitting each line into fields on the "," delimiter:
DataStream<String> flatMapStream = dataStream.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        String[] fields = value.split(",");
        for (String field : fields) {
            out.collect(field);
        }
    }
});
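The flatMap can likewise be written as a lambda. The `Collector` parameter makes the lambda's output type non-inferable, so a `returns(...)` hint is again required. A sketch under the same assumptions as the snippet above:

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.util.Collector;

// Lambda form of the flatMap above: one input line, one output
// element per comma-separated field.
DataStream<String> flatMapStream = dataStream
        .flatMap((String value, Collector<String> out) -> {
            for (String field : value.split(",")) {
                out.collect(field);
            }
        })
        .returns(Types.STRING);
```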
1.3 Filter
filter can be thought of as the WHERE clause of a SQL statement: it keeps only the elements for which the predicate returns true. Note that filter never changes the element type, so the result is still a DataStream of the input type:
DataStream<String> filterStream = dataStream.filter(new FilterFunction<String>() {
    @Override
    public boolean filter(String value) throws Exception {
        return "1".equals(value);   // keep only elements equal to "1"
    }
});
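Since filter's output type equals its input type, the lambda form needs no `returns(...)` hint. A sketch, again assuming the `dataStream` from the snippets above:

```java
import org.apache.flink.streaming.api.datastream.DataStream;

// Lambda form of a filter: keep lines whose id starts with "sensor_1".
// Caution: startsWith("sensor_1") also matches "sensor_10".
DataStream<String> filterStream = dataStream
        .filter(value -> value.startsWith("sensor_1"));
```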
2. Code
Data preparation (the code below splits on ",", so each line is comma-separated):
sensor.txt
sensor_1,1547718199,35.8
sensor_6,1547718201,15.4
sensor_7,1547718202,6.7
sensor_10,1547718205,38.1
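Each line carries three fields: a sensor id, a timestamp, and a temperature. A minimal plain-Java sketch of the parsing the flatMap below relies on (the variable names are illustrative, not from the source):

```java
// Parse one comma-separated sensor line into its three fields.
String line = "sensor_1,1547718199,35.8";
String[] fields = line.split(",");
String id = fields[0];                              // sensor id
long timestamp = Long.parseLong(fields[1]);         // epoch seconds
double temperature = Double.parseDouble(fields[2]); // reading
```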
Code:
package org.flink.transform;
/**
* @author 只是甲
* @date 2021-08-31
* @remark Flink basic transforms: map, flatMap, filter
*/
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
public class TransformTest1_Base {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        // Read the input from a file
        DataStream<String> inputStream = env.readTextFile("C:\\Users\\Administrator\\IdeaProjects\\FlinkStudy\\src\\main\\resources\\sensor.txt");

        // 1. map: convert each String to its length
        DataStream<Integer> mapStream = inputStream.map(new MapFunction<String, Integer>() {
            @Override
            public Integer map(String value) throws Exception {
                return value.length();
            }
        });

        // 2. flatMap: split each line into fields on the comma
        DataStream<String> flatMapStream = inputStream.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) throws Exception {
                String[] fields = value.split(",");
                for (String field : fields) {
                    out.collect(field);
                }
            }
        });

        // 3. filter: keep records whose id starts with "sensor_1"
        //    (note: this also matches "sensor_10")
        DataStream<String> filterStream = inputStream.filter(new FilterFunction<String>() {
            @Override
            public boolean filter(String value) throws Exception {
                return value.startsWith("sensor_1");
            }
        });

        // Print the results
        mapStream.print("map");
        flatMapStream.print("flatMap");
        filterStream.print("filter");

        env.execute();
    }
}
Running result:
Flink processes records as they arrive in the stream, one at a time. Since the parallelism is 1, each of the three operators prints a result as soon as it has computed one.
If we change the parallelism to 2 and run again, the output looks different: results from the two parallel subtasks interleave, and each printed line is prefixed with the index of the subtask that produced it.