Flink's Accumulator serves much the same purpose as Spark's Accumulator: both give you a convenient view of how a task's data changes while it runs.
You can manipulate an accumulator inside a Flink job's operator functions, but the final result only becomes available after the job has finished executing.
Using an accumulator in Flink takes four simple steps:
1: Create the accumulator: val acc = new IntCounter()
2: Register the accumulator: getRuntimeContext().addAccumulator("accumulator", acc)
3: Use the accumulator: this.acc.add(1)
4: Get the accumulator's result: myJobExecutionResult.getAccumulatorResult("accumulator")
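Besides fetching a single result by name, the JobExecutionResult returned by execute() also exposes getAllAccumulatorResults, which returns every registered accumulator as a java.util.Map from name to value. A minimal sketch, assuming res is the JobExecutionResult from env.execute(...):

import scala.collection.JavaConverters._

// All accumulators of the job, keyed by the name they were registered under
val all: java.util.Map[String, AnyRef] = res.getAllAccumulatorResults
for ((name, value) <- all.asScala) {
  println(s"$name -> $value")
}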
Here is a complete demo:
package flink
import org.apache.flink.api.common.accumulators.IntCounter
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.api.scala.ExecutionEnvironment
import org.apache.flink.api.scala._
import org.apache.flink.configuration.Configuration
/**
 * Demonstrates how to use accumulators in Flink.
 */
object flinkBatch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements("Hello Jason What are you doing Hello world")
    val counts = text
      .flatMap(_.toLowerCase.split(" "))
      .map(new RichMapFunction[String, String] {
        // Create the accumulator
        val acc = new IntCounter()
        override def open(parameters: Configuration): Unit = {
          super.open(parameters)
          // Register the accumulator
          getRuntimeContext.addAccumulator("accumulator", acc)
        }
        override def map(in: String): String = {
          // Use the accumulator
          this.acc.add(1)
          in
        }
      }).map((_, 1))
      .groupBy(0)
      .sum(1)
    counts.writeAsText("d:/test.txt").setParallelism(1)
    val res = env.execute("Accumulator Test")
    // Get the accumulator's result
    val num = res.getAccumulatorResult[Int]("accumulator")
    println(num)
  }
}
After submitting the job to a cluster, you can also see the registered accumulator's information in the Flink web UI.
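IntCounter is only one of Flink's built-in accumulators; org.apache.flink.api.common.accumulators also provides LongCounter, DoubleCounter, and Histogram. Below is a minimal sketch using a Histogram in the same DataSet setup as the demo above; the object name histogramSketch and the accumulator name word-length-histogram are made up for illustration:

package flink

import org.apache.flink.api.common.accumulators.Histogram
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.api.java.io.DiscardingOutputFormat
import org.apache.flink.api.scala._
import org.apache.flink.configuration.Configuration
import org.apache.flink.util.Collector

object histogramSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements("Hello Jason What are you doing Hello world")
    val lengths = text.flatMap(new RichFlatMapFunction[String, Int] {
      // A Histogram counts how many times each integer value was added
      val hist = new Histogram()
      override def open(parameters: Configuration): Unit = {
        super.open(parameters)
        getRuntimeContext.addAccumulator("word-length-histogram", hist) // hypothetical name
      }
      override def flatMap(line: String, out: Collector[Int]): Unit = {
        for (word <- line.toLowerCase.split(" ")) {
          hist.add(word.length) // bucket each word by its length
          out.collect(word.length)
        }
      }
    })
    // The DataSet API needs a sink before execute(); we only care about the accumulator here
    lengths.output(new DiscardingOutputFormat[Int])
    val res = env.execute("Histogram Accumulator Sketch")
    // The Histogram result is a map from value to occurrence count
    println(res.getAccumulatorResult[java.util.Map[Integer, Integer]]("word-length-histogram"))
  }
}

For this input the sketch should print {3=2, 4=1, 5=5} once the job finishes: two three-letter words, one four-letter word, and five five-letter words.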
If anything above is incorrect, corrections are welcome. If you have any questions, feel free to join QQ group 340297350. Thanks!