Code on GitHub: https://github.com/SmallScorpion/flink-tutorial.git
map
DataStream → DataStream: takes one element and produces one element.
val streamMap = stream.map { x => x * 2 }
flatMap
The function signature of flatMap: def flatMap[A,B](as: List[A])(f: A ⇒ List[B]): List[B]
For example, flatMap(List(1,2,3))(i ⇒ List(i,i))
returns List(1,1,2,2,3,3),
while List("a b", "c d").flatMap(line ⇒ line.split(" "))
returns List(a, b, c, d).
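The two results above can be checked with plain Scala collections (no Flink required); Flink's DataStream.flatMap follows the same one-to-many semantics:

```scala
object FlatMapDemo extends App {
  // Each element i is expanded into List(i, i), and the per-element lists are concatenated
  val doubled = List(1, 2, 3).flatMap(i => List(i, i))
  println(doubled) // List(1, 1, 2, 2, 3, 3)

  // Each line is split into words; the per-line arrays are flattened into one list
  val words = List("a b", "c d").flatMap(line => line.split(" "))
  println(words) // List(a, b, c, d)
}
```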
val streamFlatMap = stream.flatMap {
  x => x.split(" ")
}
Filter
val streamFilter = stream.filter {
  x => x == 1
}
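The map and filter one-liners above can likewise be checked on plain Scala lists, since the stream operators apply the same function element by element:

```scala
object MapFilterDemo extends App {
  // map: each element is doubled, one output per input
  val mapped = List(1, 2, 3).map(x => x * 2)
  println(mapped) // List(2, 4, 6)

  // filter: keep only the elements for which the predicate is true
  val filtered = List(1, 2, 1, 3).filter(x => x == 1)
  println(filtered) // List(1, 1)
}
```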
KeyBy
DataStream → KeyedStream: logically partitions a stream into disjoint partitions, where each partition contains the elements with the same key; internally this is implemented with hash partitioning.
These operators can aggregate over each partition of a KeyedStream:
- sum()
- min()
- max()
- minBy()
- maxBy()
// Get the record with the lowest temperature per id group
val keyByDStream: DataStream[SensorReading] = dataDstream.keyBy("id").minBy("temperature")
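As a rough collection-based analogy (not the Flink API), keyBy("id") groups records by id and minBy("temperature") keeps, for each group, the whole record with the lowest temperature. Note that Flink's rolling minBy emits an updated result for every incoming record, whereas this batch sketch shows only the final per-key minimum; the sample readings are made up for illustration:

```scala
object KeyByMinByDemo extends App {
  // Mirrors the SensorReading(id, timestamp, temperature) bean from the tutorial
  case class SensorReading(id: String, timestamp: Long, temperature: Double)

  val readings = List(
    SensorReading("sensor_1", 1L, 35.8),
    SensorReading("sensor_1", 2L, 32.1),
    SensorReading("sensor_2", 3L, 15.4)
  )

  // groupBy plays the role of keyBy; minBy keeps the full record, not just the field
  val minPerKey = readings.groupBy(_.id).map { case (_, rs) => rs.minBy(_.temperature) }
  minPerKey.foreach(println)
}
```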
Reduce
KeyedStream → DataStream: an aggregation on a keyed stream that merges the current element with the last aggregated result to produce a new value. The returned stream contains the result of every aggregation step, not just the final result of the last aggregation.
// 3. A more complex aggregation with reduce: the minimum temperature so far for the current id, and the latest timestamp + 1
val reduceStream: DataStream[SensorReading] = dataDstream
  .keyBy("id")
  .reduce( (curState, newData) =>
    // curState is the previous aggregate, newData is the incoming record
    SensorReading(curState.id, newData.timestamp + 1, curState.temperature.min(newData.temperature)) )
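The claim that reduce emits the result of every aggregation step (not only the final one) can be sketched with scanLeft on a plain list; a keyed Flink reduce likewise produces one output per input record, applying the same reduce function as above. The sample readings are made up:

```scala
object ReduceDemo extends App {
  case class SensorReading(id: String, timestamp: Long, temperature: Double)

  val readings = List(
    SensorReading("sensor_1", 1L, 35.8),
    SensorReading("sensor_1", 2L, 32.1),
    SensorReading("sensor_1", 3L, 33.5)
  )

  // Same reduce function as above: keep the minimum temperature, advance the timestamp by 1.
  // scanLeft keeps every intermediate state, mimicking the per-record output of a keyed reduce.
  val emitted = readings.tail.scanLeft(readings.head) { (curState, newData) =>
    SensorReading(curState.id, newData.timestamp + 1, curState.temperature.min(newData.temperature))
  }
  // One output per input record: 35.8, then min(35.8, 32.1) = 32.1, then min(32.1, 33.5) = 32.1
  emitted.map(_.temperature).foreach(println)
}
```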
split/select
DataStream → SplitStream: (split) splits a DataStream into two or more DataStreams according to some criteria. (It is actually still a single stream; the records are merely tagged with different labels.)
SplitStream → DataStream: (select) retrieves one or more DataStreams from a SplitStream.
Requirement: split the sensor data into two streams by temperature, with 30 degrees as the boundary.
import com.atguigu.bean.SensorReading
import org.apache.flink.streaming.api.scala._
/**
 * Split operation, split/select: divide the stream into high- and low-temperature streams at 30 degrees
 */
object SplitAndSelectTransform {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)
    val inputDStream: DataStream[String] = env.readTextFile("D:\\MyWork\\WorkSpaceIDEA\\flink-tutorial\\src\\main\\resources\\SensorReading.txt")
    val dataDstream: DataStream[SensorReading] = inputDStream.map(
      data => {
        val dataArray: Array[String] = data.split(",")
        SensorReading(dataArray(0), dataArray(1).toLong, dataArray(2).toDouble)
      })
    // Tag the records
    val splitStream: SplitStream[SensorReading] = dataDstream.split(
      data => {
        if (data.temperature >= 30)
          Seq("high")
        else
          Seq("low")
      }
    )
    // Turn the SplitStream back into DataStreams according to the tags
    val highSensorDStream: DataStream[SensorReading] = splitStream.select("high")
    val lowSensorDStream: DataStream[SensorReading] = splitStream.select("low")
    val allSensorDStream: DataStream[SensorReading] = splitStream.select("high", "low")
    highSensorDStream.print("high")
    lowSensorDStream.print("low")
    allSensorDStream.print("all")
    env.execute("split and select test job")
  }
}
Connect and CoMap
DataStream, DataStream → ConnectedStreams: (connect) connects two data streams while keeping their types. After the two streams are connected, they are merely placed in one stream; internally each keeps its own data and form unchanged, and the two streams remain independent of each other.
ConnectedStreams → DataStream: (CoMap, CoFlatMap) works on ConnectedStreams with the same functionality as map and flatMap, applying a separate map/flatMap to each of the streams in the ConnectedStreams.
import com.atguigu.bean.SensorReading
import org.apache.flink.streaming.api.scala._
/**
 * Merge operation, connect/comap
 */
object ConnectAndCoMapTransform {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)
    val inputDStream: DataStream[String] = env.readTextFile("D:\\MyWork\\WorkSpaceIDEA\\flink-tutorial\\src\\main\\resources\\SensorReading.txt")
    val dataDstream: DataStream[SensorReading] = inputDStream.map(
      data => {
        val dataArray: Array[String] = data.split(",")
        SensorReading(dataArray(0), dataArray(1).toLong, dataArray(2).toDouble)
      })
    // Tag the records
    val splitStream: SplitStream[SensorReading] = dataDstream.split(
      data => {
        if (data.temperature >= 30)
          Seq("high")
        else
          Seq("low")
      }
    )
    // Turn the SplitStream back into DataStreams according to the tags
    val highSensorDStream: DataStream[SensorReading] = splitStream.select("high")
    val lowSensorDStream: DataStream[SensorReading] = splitStream.select("low")
    // To verify that connect can merge two streams of different types,
    // convert one of them into tuple form
    val highWarningDStream: DataStream[(String, Double)] = highSensorDStream.map(
      data => (data.id, data.temperature)
    )
    // Connect the two streams
    val connectedStreams: ConnectedStreams[(String, Double), SensorReading] = highWarningDStream
      .connect(lowSensorDStream)
    // Process the data of the two streams separately and merge them into one stream
    val coMapDStream: DataStream[(String, Double, String)] = connectedStreams.map(
      // highWarningData is a tuple (String, Double)
      highWarningData => (highWarningData._1, highWarningData._2, "warning"),
      // lowTempData is a SensorReading
      lowTempData => (lowTempData.id, lowTempData.temperature, "normal")
    )
    coMapDStream.print("coMap")
    env.execute("transform test job")
  }
}
Union
DataStream → DataStream: (union) unions two or more DataStreams into a new DataStream containing all elements of all the input streams. (Multiple streams can be merged at once.)
// The streams must have the same type
val unionDStream: DataStream[SensorReading] = highSensorDStream.union(lowSensorDStream, allSensorDStream)
Differences between Connect and Union
1. Union requires the two streams to have the same type; Connect does not, and the types can be unified afterwards in the coMap.
2. Connect can only operate on two streams, while Union can operate on many.
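The two differences can be illustrated with a collection analogy (plain Scala, not the Flink API): concatenation, like union, only works on a common element type, while connect-style processing takes differently typed inputs and only unifies them after each side gets its own map. The sample values are made up:

```scala
object ConnectVsUnionDemo extends App {
  val high: List[(String, Double)] = List(("sensor_1", 35.8))
  val low: List[Double] = List(15.4)

  // Union analogy: ++ only compiles when both sides share an element type
  val unioned: List[Double] = List(36.0, 37.2) ++ List(15.4, 16.1)
  println(unioned) // List(36.0, 37.2, 15.4, 16.1)

  // Connect/coMap analogy: each differently typed input gets its own map function,
  // and only the mapped results share a type and can be merged
  val coMapped: List[(String, Double, String)] =
    high.map { case (id, t) => (id, t, "warning") } ++
      low.map(t => ("unknown", t, "normal"))
  println(coMapped) // List((sensor_1,35.8,warning), (unknown,15.4,normal))
}
```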