Flink Extension API

I. Introduction

To keep a fair amount of consistency between the Scala and Java APIs, the standard APIs for both batch and streaming leave out some features that would allow a higher level of expressiveness in Scala.
If you want the full Scala experience, you can opt in to extensions that enhance the Scala API through implicit conversions.
To use all of the available extensions, you only need to add the corresponding import:
1. DataSet API

import org.apache.flink.api.scala.extensions._

2. DataStream API

import org.apache.flink.streaming.api.scala.extensions._

Alternatively, you can also import individual extensions as needed.

II. Pattern-Matching Extensions

Normally, neither the DataSet nor the DataStream API accepts anonymous pattern-matching functions for deconstructing tuples, case classes, or collections, as in the following:

val data: DataSet[(Int, String, Double)] = // [...]
data.map {
  case (id, name, temperature) => // [...]
  // The previous line causes the following compilation error:
  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
}

This extension introduces new methods into both the DataSet and DataStream Scala APIs, each corresponding one-to-one to a method in the standard API. These delegating methods do support anonymous pattern-matching functions.
1. DataSet API

(The original post embedded screenshots of the table mapping the standard DataSet methods to their extension counterparts, e.g. map → mapWith, filter → filterWith, groupBy → groupingBy.)

To use only this extension for the DataSet API, add the following import:

import org.apache.flink.api.scala.extensions.acceptPartialFunctions
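
For a quick feel of what these methods enable, here is a minimal sketch (the Reading case class and the sample values are invented for illustration); filterWith and mapWith are the pattern-matching counterparts of filter and map:

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions.acceptPartialFunctions

// Hypothetical case class, used only to illustrate the extension methods
case class Reading(id: Int, name: String, temperature: Double)

object DataSetExtensionSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val data: DataSet[Reading] = env.fromElements(
      Reading(1, "a", 21.5),
      Reading(2, "b", 18.0),
      Reading(3, "c", 25.3))

    // filterWith / mapWith accept anonymous pattern-matching functions,
    // which plain filter / map would reject
    data
      .filterWith { case Reading(_, _, temperature) => temperature > 20.0 }
      .mapWith { case Reading(id, name, _) => (id, name) }
      .print()
  }
}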

2. DataStream API

(The original post embedded screenshots of the corresponding method-mapping table for the DataStream API, e.g. map → mapWith, keyBy → keyingBy.)

To use only this extension for the DataStream API, add the following import:

import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions
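
A comparable sketch for the streaming API (the SensorReading case class and its values are again invented for illustration); on a DataStream, filterWith and mapWith play the same role as on a DataSet:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions

// Hypothetical case class, used only to illustrate the extension methods
case class SensorReading(sensorId: String, value: Double)

object DataStreamExtensionSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val readings: DataStream[SensorReading] = env.fromElements(
      SensorReading("s1", 3.2),
      SensorReading("s2", 7.8),
      SensorReading("s1", 9.1))

    // filterWith / mapWith accept anonymous pattern-matching functions
    readings
      .filterWith { case SensorReading(_, value) => value > 5.0 }
      .mapWith { case SensorReading(id, value) => (id, value * 2) }
      .print()

    env.execute("acceptPartialFunctions streaming sketch")
  }
}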

III. Code Example

The following snippet shows an example of how these extension methods are used together with the DataSet API:

package cn.extensions

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.ExecutionEnvironment

/**
  * Created by Administrator on 2020/5/29.
  */
case class Person(x: String, y: Int)
object Match {
  def main(args: Array[String]): Unit = {
    // set up the batch execution environment
    val env = ExecutionEnvironment.getExecutionEnvironment

    val text = "Apache Flink apache spark apache solr hbase hive flink kafka redis tachyon redis"
    val persons = text.toLowerCase.split(" ").map(row => Person(row, 1))
    
    import org.apache.flink.api.scala.extensions._
    val ds = env.fromCollection(persons)
    // filterWith and groupingBy are the extension methods that accept
    // anonymous pattern-matching functions
    val result = ds.filterWith {
      case Person(x, y) => y > 0
    }.groupingBy {
      case Person(x, _) => x
    }.sum("y")

    result.first(5).print()
  }
}

The job fails with the following exception:

Exception in thread "main" java.lang.UnsupportedOperationException: Aggregate does not support grouping with KeySelector functions, yet.
	at org.apache.flink.api.scala.operators.ScalaAggregateOperator.translateToDataFlow(ScalaAggregateOperator.java:220)
	at org.apache.flink.api.scala.operators.ScalaAggregateOperator.translateToDataFlow(ScalaAggregateOperator.java:55)
	at org.apache.flink.api.java.operators.OperatorTranslation.translateSingleInputOperator(OperatorTranslation.java:148)
	at org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:102)
	at org.apache.flink.api.java.operators.OperatorTranslation.translateSingleInputOperator(OperatorTranslation.java:146)
	at org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:102)
	at org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:63)
	at org.apache.flink.api.java.operators.OperatorTranslation.translateToPlan(OperatorTranslation.java:52)
	at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:955)
	at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:922)
	at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:85)
	at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:816)
	at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
	at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
	at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1726)
	at cn.extensions.Match$.main(Match.scala:29)
	at cn.extensions.Match.main(Match.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

From the stack trace, the failure comes from the grouping defined with groupingBy: that extension method builds a KeySelector function, and the sum aggregation does not support groupings expressed as KeySelector functions. Switching back to the standard groupBy (on a field name or position) resolves it, as sketched below:
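
A minimal sketch of that fix, keeping the rest of the example above unchanged (only the grouping call changes; groupBy("x") produces an expression key, which the aggregation can translate):

    val result = ds.filterWith {
      case Person(x, y) => y > 0
    }.groupBy("x") // standard groupBy on the case-class field instead of groupingBy
     .sum("y")

    result.first(5).print()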
