Parsing Flink SQL Column-Level Data Lineage with Calcite

Data Lineage

Data lineage is an important part of data governance and a powerful tool for metadata management and data quality management. Put simply, data lineage is the layered, traceable set of relationships that data forms as it is produced, processed, and moved around until it is finally consumed. A mature data lineage system helps developers locate problems quickly, track changes to data, determine upstream and downstream impact, and so on.

In a data warehouse, data lives in the tables and columns (fields) of databases, so lineage can likewise be split by granularity into coarser table-level lineage and finer column-level (field-level) lineage. For offline warehouses there are already mature ways to extract lineage, such as Hive's LineageLogger and Execution Hooks mechanisms. This article briefly introduces a way to parse the column-level lineage of Flink SQL in a real-time warehouse based on Calcite. Before that, a few words about Calcite's relational metadata system.

Calcite Relational Metadata

Inside Calcite, database and table metadata is handled by the Catalog; only relational metadata carries the [Rel]Metadata name. Relational metadata is associated with RelNodes, and the Calcite components involved are:

  • RelMetadataQuery: the unified access interface for relational metadata;
  • RelMetadataProvider: the intermediate layer that supplies implementations for the RelMetadataQuery interfaces;
  • MetadataFactory: the factory that produces and maintains RelMetadataProviders;
  • MetadataHandler: the concrete implementation logic for each kind of relational metadata; all handlers live in the org.apache.calcite.rel.metadata package, with class names prefixed by RelMd.

Calcite ships with a number of default relational metadata implementations, maintained as interfaces inside the BuiltInMetadata abstract class, as shown in the figure below. The names are fairly self-explanatory (e.g. RowCount is the number of rows in the RelNode's query result).
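
As a quick illustration, all of these built-in metadata are consumed through RelMetadataQuery. Below is a minimal sketch (in Java, like Calcite itself); relNode stands for any logical plan node you already hold:

// Minimal sketch: querying built-in relational metadata for an arbitrary RelNode.
// RelMetadataQuery and RelColumnOrigin live in org.apache.calcite.rel.metadata.
RelMetadataQuery mq = relNode.getCluster().getMetadataQuery();
Double rowCount = mq.getRowCount(relNode);                       // BuiltInMetadata.RowCount
Set<RelColumnOrigin> origins = mq.getColumnOrigins(relNode, 0);  // BuiltInMetadata.ColumnOrigin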

Among them, ColumnOrigin.Handler is the MetadataHandler responsible for resolving column-level lineage: for each kind of RelNode it defines a method for finding the origin columns, with the structure shown in the figure below. Its source code deserves a dedicated article, so it won't be covered here.

Note that the vast majority of MetadataHandlers, ColumnOrigin.Handler included, take effect through ReflectiveRelMetadataProvider. As its name suggests, ReflectiveRelMetadataProvider obtains the methods of each MetadataHandler via reflection and internally maintains a mapping from concrete RelNode types to Metadata proxy objects generated with Java Proxy (which wrap the handler methods). This way, when relational metadata is requested through RelMetadataQuery, the call is dispatched to the correct method according to the RelNode type.
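
For reference, this is roughly how a handler gets registered through ReflectiveRelMetadataProvider (modelled on the SOURCE field of RelMdColumnOrigins in Calcite 1.26; treat it as a sketch rather than an exact quote of the source):

// A MetadataHandler exposed as a RelMetadataProvider via reflection.
public static final RelMetadataProvider SOURCE =
    ReflectiveRelMetadataProvider.reflectiveSource(
        BuiltInMethod.COLUMN_ORIGIN.method, new RelMdColumnOrigins());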

In addition, a few MetadataHandlers (such as those for CumulativeCost/NonCumulativeCost) have no concrete implementation in the Calcite code base. Their code is generated at runtime and compiled on the fly by JaninoRelMetadataProvider. Code generation and Janino are also on the writing plan, so I won't dwell on them here.

Of course, in practice we don't need to know these details; we only ever deal with RelMetadataQuery. Let's now see how to obtain the Flink SQL column lineage we want through it.

Parsing Flink SQL Column-Level Lineage

Taking a single INSERT INTO ... SELECT ... statement, the most common shape of a Flink SQL job, as an example, we first need the RelNode object generated from the SQL statement, i.e. the logical plan tree.

To keep the walkthrough simple, I bluntly added a getInsertOperation() method to the o.a.f.table.api.internal.TableEnvironmentImpl class. It parses and validates the SQL statement, produces the CatalogSinkModifyOperation, and returns its PlannerQueryOperation child (i.e. the SELECT operation). The code is as follows.

public Tuple3<String, Map<String, String>, QueryOperation> getInsertOperation(String insertStmt) {
    List<Operation> operations = getParser().parse(insertStmt);
    if (operations.size() != 1) {
        throw new TableException(
                "Unsupported SQL query! getInsertOperation() only accepts a single INSERT statement.");
    }
    Operation operation = operations.get(0);
    if (operation instanceof CatalogSinkModifyOperation) {
        CatalogSinkModifyOperation sinkOperation = (CatalogSinkModifyOperation) operation;
        QueryOperation queryOperation = sinkOperation.getChild();
        return new Tuple3<>(
                sinkOperation.getTableIdentifier().asSummaryString(),
                sinkOperation.getDynamicOptions(),
                queryOperation);
    } else {
        throw new TableException("Only INSERT is supported now.");
    }
}

With that we can obtain the sink table name and the corresponding root RelNode. The example SQL comes from my earlier <<From Calcite to Tampering with Flink SQL>> slides.

val tableEnv = StreamTableEnvironment.create(streamEnv, EnvironmentSettings.newInstance().build())
val sql = /* language=SQL */
  s"""
     |INSERT INTO tmp.print_joined_result
     |SELECT FROM_UNIXTIME(a.ts / 1000, 'yyyy-MM-dd HH:mm:ss') AS tss, a.userId, a.eventType, a.siteId, b.site_name AS siteName
     |FROM rtdw_ods.kafka_analytics_access_log_app /*+ OPTIONS('scan.startup.mode'='latest-offset','properties.group.id'='DiveIntoBlinkExp') */ a
     |LEFT JOIN rtdw_dim.mysql_site_war_zone_mapping_relation FOR SYSTEM_TIME AS OF a.procTime AS b ON CAST(a.siteId AS INT) = b.main_site_id
     |WHERE a.userId > 7
     |""".stripMargin

val insertOp = tableEnv.asInstanceOf[TableEnvironmentImpl].getInsertOperation(sql)
val tableName = insertOp.f0
val relNode = insertOp.f2.asInstanceOf[PlannerQueryOperation].getCalciteTree

Then run logical optimization on the RelNode we obtained, i.e. execute the FlinkStreamProgram covered previously, but only up to the LOGICAL_REWRITE phase. We make a local copy of FlinkStreamProgram and delete the PHYSICAL and PHYSICAL_REWRITE phases, i.e.:

object FlinkStreamProgramLogicalOnly {

  val SUBQUERY_REWRITE = "subquery_rewrite"
  val TEMPORAL_JOIN_REWRITE = "temporal_join_rewrite"
  val DECORRELATE = "decorrelate"
  val TIME_INDICATOR = "time_indicator"
  val DEFAULT_REWRITE = "default_rewrite"
  val PREDICATE_PUSHDOWN = "predicate_pushdown"
  val JOIN_REORDER = "join_reorder"
  val PROJECT_REWRITE = "project_rewrite"
  val LOGICAL = "logical"
  val LOGICAL_REWRITE = "logical_rewrite"

  def buildProgram(config: Configuration): FlinkChainedProgram[StreamOptimizeContext] = {
    val chainedProgram = new FlinkChainedProgram[StreamOptimizeContext]()

    // rewrite sub-queries to joins
    chainedProgram.addLast(
      SUBQUERY_REWRITE,
      FlinkGroupProgramBuilder.newBuilder[StreamOptimizeContext]
        // rewrite QueryOperationCatalogViewTable before rewriting sub-queries
        .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder
          .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE)
          .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
          .add(FlinkStreamRuleSets.TABLE_REF_RULES)
          .build(), "convert table references before rewriting sub-queries to semi-join")
        .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder
          .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE)
          .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
          .add(FlinkStreamRuleSets.SEMI_JOIN_RULES)
          .build(), "rewrite sub-queries to semi-join")
        .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder
          .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION)
          .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
          .add(FlinkStreamRuleSets.TABLE_SUBQUERY_RULES)
          .build(), "sub-queries remove")
        // convert RelOptTableImpl (which exists in SubQuery before) to FlinkRelOptTable
        .addProgram(FlinkHepRuleSetProgramBuilder.newBuilder
          .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE)
          .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
          .add(FlinkStreamRuleSets.TABLE_REF_RULES)
          .build(), "convert table references after sub-queries removed")
        .build())

    // rewrite special temporal join plan
    // ...

    // query decorrelation
    // ...

    // convert time indicators
    // ...

    // default rewrite, includes: predicate simplification, expression reduction, window
    // properties rewrite, etc.
    // ...

    // rule based optimization: push down predicate(s) in where clause, so it only needs to read
    // the required data
    // ...

    // join reorder
    // ...

    // project rewrite
    // ...

    // optimize the logical plan
    chainedProgram.addLast(
      LOGICAL,
      FlinkVolcanoProgramBuilder.newBuilder
        .add(FlinkStreamRuleSets.LOGICAL_OPT_RULES)
        .setRequiredOutputTraits(Array(FlinkConventions.LOGICAL))
        .build())

    // logical rewrite
    chainedProgram.addLast(
      LOGICAL_REWRITE,
      FlinkHepRuleSetProgramBuilder.newBuilder
        .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE)
        .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
        .add(FlinkStreamRuleSets.LOGICAL_REWRITE)
        .build())

    chainedProgram
  }
}

Then simply execute FlinkStreamProgramLogicalOnly. Note the context information that StreamOptimizeContext requires; it is gathered through various workarounds (the FunctionCatalog, for instance, can be fetched by adding a getter to TableEnvironmentImpl, as sketched below).
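
The getter in question is trivial; here is a minimal sketch of what could be patched into TableEnvironmentImpl (assuming its internal field is named functionCatalog, as in the upstream class):

// Hypothetical getter added to o.a.f.table.api.internal.TableEnvironmentImpl,
// exposing the internal FunctionCatalog needed by StreamOptimizeContext below.
public FunctionCatalog getFunctionCatalog() {
    return functionCatalog;
}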

val logicalProgram = FlinkStreamProgramLogicalOnly.buildProgram(tableEnvConfig)

val optRelNode = logicalProgram.optimize(relNode, new StreamOptimizeContext {
  override def getTableConfig: TableConfig = tableEnv.getConfig

  override def getFunctionCatalog: FunctionCatalog = tableEnv.asInstanceOf[TableEnvironmentImpl].getFunctionCatalog

  override def getCatalogManager: CatalogManager = tableEnv.asInstanceOf[TableEnvironmentImpl].getCatalogManager

  override def getRexBuilder: RexBuilder = relNode.getCluster.getRexBuilder

  override def getSqlExprToRexConverterFactory: SqlExprToRexConverterFactory =
    relNode.getCluster.getPlanner.getContext.unwrap(classOf[FlinkContext]).getSqlExprToRexConverterFactory

  override def isUpdateBeforeRequired: Boolean = false

  override def needFinalTimeIndicatorConversion: Boolean = true

  override def getMiniBatchInterval: MiniBatchInterval = MiniBatchInterval.NONE
})

Let's compare the RelNode before and after optimization.
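
Plan dumps like the ones below can be produced with Calcite's built-in explainer; a sketch (the original output here may well have come from Flink's own plan utilities instead):

// Sketch: print the logical plan before and after optimization with org.apache.calcite.plan.RelOptUtil.
// relNode / optRelNode refer to the plan nodes obtained in the snippets above.
System.out.println("--- Original RelNode ---");
System.out.println(RelOptUtil.toString(relNode));
System.out.println("--- Optimized RelNode ---");
System.out.println(RelOptUtil.toString(optRelNode));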

--- Original RelNode ---
LogicalProject(tss=[FROM_UNIXTIME(/($0, 1000), _UTF-16LE'yyyy-MM-dd HH:mm:ss')], userId=[$3], eventType=[$4], siteId=[$8], siteName=[$46])
  LogicalFilter(condition=[>($3, 7)])
    LogicalCorrelate(correlation=[$cor0], joinType=[left], requiredColumns=[{8, 44}])
      LogicalProject(ts=[$0], tss=[$1], tssDay=[$2], userId=[$3], eventType=[$4], columnType=[$5], fromType=[$6], grouponId=[$7], /* ... */, procTime=[PROCTIME()])
        LogicalTableScan(table=[[hive, rtdw_ods, kafka_analytics_access_log_app]], hints=[[[OPTIONS inheritPath:[] options:{properties.group.id=DiveIntoBlinkExp, scan.startup.mode=latest-offset}]]])
      LogicalFilter(condition=[=(CAST($cor0.siteId):INTEGER, $8)])
        LogicalSnapshot(period=[$cor0.procTime])
          LogicalTableScan(table=[[hive, rtdw_dim, mysql_site_war_zone_mapping_relation]])

--- Optimized RelNode ---
FlinkLogicalCalc(select=[FROM_UNIXTIME(/(ts, 1000), _UTF-16LE'yyyy-MM-dd HH:mm:ss') AS tss, userId, eventType, siteId, site_name AS siteName])
  FlinkLogicalJoin(condition=[=($4, $6)], joinType=[left])
    FlinkLogicalCalc(select=[ts, userId, eventType, siteId, CAST(siteId) AS siteId0], where=[>(userId, 7)])
      FlinkLogicalTableSourceScan(table=[[hive, rtdw_ods, kafka_analytics_access_log_app]], fields=[ts, tss, tssDay, userId, eventType, columnType, fromType, grouponId, /* ... */, latitude, longitude], hints=[[[OPTIONS options:{properties.group.id=DiveIntoBlinkExp, scan.startup.mode=latest-offset}]]])
    FlinkLogicalSnapshot(period=[$cor0.procTime])
      FlinkLogicalCalc(select=[site_name, main_site_id])
        FlinkLogicalTableSourceScan(table=[[hive, rtdw_dim, mysql_site_war_zone_mapping_relation]], fields=[site_id, site_name, site_city_id, /* ... */])

Two issues deserve attention here.

First, Calcite's RelMdColumnOrigins handler class does not handle the Snapshot kind of RelNode, and its fallback logic returns null for every non-leaf RelNode, so by default the lineage of columns coming from a lookup join cannot be obtained. We need to patch its source so that it keeps descending when it encounters a Snapshot:

// Added to o.a.c.rel.metadata.RelMdColumnOrigins: for a Snapshot (the lookup-join source),
// simply delegate to its input instead of falling through to the null-returning fallback.
public Set<RelColumnOrigin> getColumnOrigins(Snapshot rel,
    RelMetadataQuery mq, int iOutputColumn) {
  return mq.getColumnOrigins(rel.getInput(), iOutputColumn);
}

Second, the Calcite version used by Flink is 1.26, which does not track the lineage of derived columns (isDerived == true, e.g. SUM(col)). Calcite 1.27 fixes this; to avoid the incompatibilities of a major version bump, the corresponding issue, CALCITE-4251, can be cherry-picked onto an internal Calcite 1.26 branch. Don't forget to rebuild Calcite Core and the Flink Table module afterwards.

Finally, RelMetadataQuery gives us the origin columns of the fields in the result table. So easy.

import scala.collection.JavaConverters._

val metadataQuery = optRelNode.getCluster.getMetadataQuery

// The example query has 5 output columns: tss, userId, eventType, siteId, siteName
for (i <- 0 to 4) {
  val origins = metadataQuery.getColumnOrigins(optRelNode, i)
  if (origins != null) {
    for (rco <- origins.asScala) {
      val table = rco.getOriginTable
      val tableName = table.getQualifiedName.asScala.mkString(".")
      val ordinal = rco.getOriginColumnOrdinal
      val fields = table.getRowType.getFieldNames
      println(Seq(tableName, ordinal, fields.get(ordinal)).mkString("\t"))
    }
  } else {
    println("NULL")
  }
}

/* Outputs:
hive.rtdw_ods.kafka_analytics_access_log_app    0   ts
hive.rtdw_ods.kafka_analytics_access_log_app    3   userId
hive.rtdw_ods.kafka_analytics_access_log_app    4   eventType
hive.rtdw_ods.kafka_analytics_access_log_app    8   siteId
hive.rtdw_dim.mysql_site_war_zone_mapping_relation  1   site_name
*/

The SQL in the example above is fairly simple, so each field yields only a single ColumnOrigin. Readers can test with multi-table JOINs or SQL containing aggregation; it works just as well when a field has multiple ColumnOrigins, and it saves the considerable hassle of hand-rolling a RelVisitor or RelShuttle.
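
One more detail worth surfacing once CALCITE-4251 is in place: each RelColumnOrigin also reports whether the output column was computed rather than copied verbatim. A sketch (in Java) of printing that flag for the same origins set as in the loop above:

// Sketch: distinguish copied columns from derived ones (e.g. SUM(col)).
for (RelColumnOrigin rco : origins) {
    String kind = rco.isDerived() ? "derived" : "direct";
    System.out.println(String.join("\t",
        String.join(".", rco.getOriginTable().getQualifiedName()),
        String.valueOf(rco.getOriginColumnOrdinal()),
        kind));
}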

As for the final visualization step, graph databases such as Neo4j and JanusGraph are commonly used to store and display column lineage. I am also exploring how to integrate Flink SQL column-level lineage into Atlas; progress is slow, so please keep your expectations modest.

The End

The blog has lain fallow for quite a while; it took a big shot showing up to nag me into updating it. How embarrassing.

Because of the pandemic, FFA 2021 has moved online; it's a real pity we can't meet in person.

Huge thanks to the organizing committee for the gift package~

Everyone is also welcome to drop by my humble presentation~

Good night, everyone.
