A summary of Flink's APIs:
0. First create the execution environment (parallelism can be set on it)
StreamExecutionEnvironment executionEnvironment =
        StreamExecutionEnvironment.getExecutionEnvironment();
executionEnvironment.setParallelism(1); // set the parallelism
1.1 Read from a collection
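A minimal sketch, assuming the environment created above and an in-memory list of strings (fromCollection is the usual entry point; fromElements works the same way):
// build a stream from a Java collection (needs java.util.Arrays)
DataStream<String> collectionStream = executionEnvironment
        .fromCollection(Arrays.asList("sensor_1", "sensor_2", "sensor_3"));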
1.2 Read from a file
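A minimal sketch; the file path below is only a placeholder assumption:
// read a text file line by line
DataStream<String> fileStream = executionEnvironment.readTextFile("src/main/resources/sensor.txt");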
1.3 Read data from a socket port:
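A minimal sketch; host and port are assumptions (e.g. a terminal running nc -lk 7777):
// read lines from a TCP socket
DataStream<String> socketStream = executionEnvironment.socketTextStream("localhost", 7777);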
1.4 Read data from Kafka:
First add the Maven dependency (the Kafka connector), then add the source as sketched below
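A minimal sketch, assuming the flink-connector-kafka dependency (artifact name and version must match your Flink build), a local broker, and placeholder topic/group names:
// consume a Kafka topic as a DataStream<String>
// needs org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer,
// org.apache.flink.api.common.serialization.SimpleStringSchema and java.util.Properties
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "flink-demo");
DataStream<String> kafkaStream = executionEnvironment.addSource(
        new FlinkKafkaConsumer<>("sensor", new SimpleStringSchema(), props));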
1.5 Custom source, which can be used to generate random test data (see https://www.bilibili.com/video/BV1MK411W7o4?p=16 when needed)
val stream5 = env.addSource(new MySensorSource())
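The line above is Scala; a rough Java sketch of what such a MySensorSource could look like (the sensor ids, value range and 500 ms interval are made up for illustration):
// a custom SourceFunction that emits random readings until cancelled
// needs org.apache.flink.streaming.api.functions.source.SourceFunction and java.util.Random
public static class MySensorSource implements SourceFunction<Tuple2<String, Double>> {
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Tuple2<String, Double>> ctx) throws Exception {
        Random random = new Random();
        while (running) {
            ctx.collect(new Tuple2<>("sensor_" + random.nextInt(10), 60 + random.nextGaussian() * 20));
            Thread.sleep(500); // emit roughly twice per second
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}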
2.1 slotSharingGroup() can be used to put an operator into its own slot sharing group, because some operations are computationally heavy and there is no point in letting them share a slot.
Use disableOperatorChaining() (on the execution environment) to break the operator chains apart directly, as sketched below.
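A minimal sketch of both options, assuming some DataStream<String> named stream obtained from one of the sources above:
// give a heavy operator its own slot sharing group (the group name "heavy" is arbitrary)
SingleOutputStreamOperator<String> heavy = stream
        .filter(value -> !value.isEmpty())
        .slotSharingGroup("heavy");

// disable operator chaining for the whole job ...
executionEnvironment.disableOperatorChaining();

// ... or only break the chain at a single operator
SingleOutputStreamOperator<String> unchained = heavy.map(String::trim).disableChaining();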
==================================
3.1 Map for data cleaning (here it mainly turns each original element from a single value into a two-field tuple used for counting)
SingleOutputStreamOperator<Tuple2<String, Integer>> result = str.map(new MapFunction<String, Tuple2<String, Integer>>() {
    @Override
    public Tuple2<String, Integer> map(String s) throws Exception {
        return new Tuple2<>(s, 1);
    }
});
3.2 flatMap for splitting (the logic here splits each line on spaces)
DataStream<String> result = str.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public void flatMap(String value, Collector<String> collector) throws Exception {
        String[] words = value.split(" ");
        for (String word : words) {
            collector.collect(word);
        }
    }
});
3.3 Filter (here it mainly drops empty strings)
DataStream<String> result = str.filter(new FilterFunction<String>() {
    @Override
    public boolean filter(String value) throws Exception {
        return !value.trim().equals("");
    }
});
3.4 Keyed aggregation (here the stream is keyed by the string's hash code, and the numbers in the second field (index 1) are then combined with the aggregation function)
SingleOutputStreamOperator<Tuple2<String, Integer>> result = result2.keyBy(0).sum(1);
3.5 Aggregation with reduce
SingleOutputStreamOperator<Tuple2<String, Integer>> result = result3.reduce(new ReduceFunction<Tuple2<String, Integer>>() {
    @Override
    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
        return new Tuple2<String, Integer>(value1.f0, value1.f1 + value2.f1);
    }
});