【Spark series 4】How Spark 3.0.1 integrates Delta 0.7.0: Delta's custom SQL

Prerequisites

This article is based on Spark 3.0.1 and Delta 0.7.0.

As we know, delta.io is an open-source storage layer that brings reliability to data lakes; for what it is useful for, see "Delta Lake, freeing you from the complexity of the Lambda architecture". Similar products include Hudi and Iceberg. Because Delta integrates seamlessly with Spark, this series analyzes the internals and framework of that integration. For Spark 3.x the integration has two parts: one is Delta's custom SQL syntax, the other is DDL/DML SQL operations built on the Catalog plugin API (which was not supported before Spark 3.x).
Today we analyze the first part: Delta's custom SQL syntax.


The custom DeltaDataSource

When using Delta, we must specify Delta's format explicitly, as in:

val data = spark.range(5, 10)
data.write.format("delta").mode("overwrite").save("/tmp/delta-table")
val df = spark.read.format("delta").load("/tmp/delta-table")
df.show()

So how does this delta data source get wired into Spark? Let's analyze it.
Going straight to DataStreamWriter:

 val cls = DataSource.lookupDataSource(source, df.sparkSession.sessionState.conf)
      val disabledSources = df.sparkSession.sqlContext.conf.disabledV2StreamingWriters.split(",")
      val useV1Source = disabledSources.contains(cls.getCanonicalName) ||
        // file source v2 does not support streaming yet.
        classOf[FileDataSourceV2].isAssignableFrom(cls)

The key point is the DataSource.lookupDataSource method:

def lookupDataSource(provider: String, conf: SQLConf): Class[_] = {
    val provider1 = backwardCompatibilityMap.getOrElse(provider, provider) match {
      case name if name.equalsIgnoreCase("orc") &&
          conf.getConf(SQLConf.ORC_IMPLEMENTATION) == "native" =>
        classOf[OrcDataSourceV2].getCanonicalName
      case name if name.equalsIgnoreCase("orc") &&
          conf.getConf(SQLConf.ORC_IMPLEMENTATION) == "hive" =>
        "org.apache.spark.sql.hive.orc.OrcFileFormat"
      case "com.databricks.spark.avro" if conf.replaceDatabricksSparkAvroEnabled =>
        "org.apache.spark.sql.avro.AvroFileFormat"
      case name => name
    }
    val provider2 = s"$provider1.DefaultSource"
    val loader = Utils.getContextOrSparkClassLoader
    val serviceLoader = ServiceLoader.load(classOf[DataSourceRegister], loader)

Here ServiceLoader.load is used, which is Java's SPI (Service Provider Interface) mechanism; the details are easy to look up online, so let's get straight to the point
and jump to the ServiceLoader.LazyIterator part:

private class LazyIterator
        implements Iterator<S>
    {

        Class<S> service;
        ClassLoader loader;
        Enumeration<URL> configs = null;
        Iterator<String> pending = null;
        String nextName = null;

        private LazyIterator(Class<S> service, ClassLoader loader) {
            this.service = service;
            this.loader = loader;
        }

        private boolean hasNextService() {
            if (nextName != null) {
                return true;
            }
            if (configs == null) {
                try {
                    String fullName = PREFIX + service.getName();
                    if (loader == null)
                        configs = ClassLoader.getSystemResources(fullName);
                    else
                        configs = loader.getResources(fullName);
                } catch (IOException x) {
                    fail(service, "Error locating configuration files", x);
                }
            }

The loader.getResources call there searches the classpath for a specific file and returns every match it finds.
In Spark's case the service looked up is DataSourceRegister, i.e. the file META-INF/services/org.apache.spark.sql.sources.DataSourceRegister; Spark's own built-in data sources are in fact loaded through exactly this mechanism.

Delta's META-INF/services/org.apache.spark.sql.sources.DataSourceRegister file contains org.apache.spark.sql.delta.sources.DeltaDataSource. Note that DeltaDataSource is implemented against the DataSource v1 API.
With that we know the basic mechanism by which the delta data source hooks into Spark.
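The SPI lookup that ServiceLoader performs can be reproduced in a few lines of plain Java. The sketch below is a toy stand-in, not Spark code: the service interface CharSequence and the provider java.lang.String play the roles of DataSourceRegister and DeltaDataSource, and the provider-configuration file is written to a temp directory instead of being packaged in a jar:

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ServiceLoader;

public class SpiDemo {
    // Returns the class name of the first provider found for CharSequence,
    // mimicking how ServiceLoader discovers DataSourceRegister implementations
    // from META-INF/services files on the classpath.
    static String firstProviderName() {
        try {
            Path root = Files.createTempDirectory("spi-demo");
            Path svc = root.resolve("META-INF/services/java.lang.CharSequence");
            Files.createDirectories(svc.getParent());
            // The file's content is the provider class name, just like Delta's
            // file containing org.apache.spark.sql.delta.sources.DeltaDataSource.
            Files.write(svc, "java.lang.String".getBytes(StandardCharsets.UTF_8));
            try (URLClassLoader loader =
                     new URLClassLoader(new URL[]{root.toUri().toURL()})) {
                for (CharSequence provider : ServiceLoader.load(CharSequence.class, loader)) {
                    return provider.getClass().getName();
                }
            }
            return null;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(firstProviderName()); // java.lang.String
    }
}
```

The only requirements on a provider are the ones ServiceLoader enforces here: it must implement the service interface and have a public no-arg constructor, which is why DeltaDataSource can be discovered purely from the file on the classpath.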

Analysis

Let's start from how Delta configures the SparkSession:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("...")
  .master("...")
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
  .getOrCreate()

Note the line config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension").
The Spark configuration documentation describes spark.sql.extensions as follows:

A comma-separated list of classes that implement Function1[SparkSessionExtensions, Unit] used to configure Spark Session extensions. The classes must have a no-args constructor. If multiple extensions are specified, they are applied in the specified order. For the case of rules and planner strategies, they are applied in the specified order. For the case of parsers, the last parser is used and each parser can delegate to its predecessor. For the case of function name conflicts, the last registered function name is used.

In one sentence: it is an extension point for the SparkSession that lets you extend Spark SQL's logical planning, and it has existed since Spark 2.2.0.
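The "applied in the specified order" behavior from the docs above can be pictured with a toy stand-in (all names here are hypothetical; the real types are SparkSessionExtensions and Function1[SparkSessionExtensions, Unit]):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class ExtensionsDemo {
    // Toy stand-in for SparkSessionExtensions: it only records injected parsers.
    static class Extensions {
        final List<String> parserBuilders = new ArrayList<>();
        void injectParser(String name) { parserBuilders.add(name); }
    }

    // Each entry in spark.sql.extensions is a function over the extensions
    // object; Spark invokes them one after another, in the configured order.
    static Extensions applyAll(List<Consumer<Extensions>> extensionFns) {
        Extensions ext = new Extensions();
        for (Consumer<Extensions> fn : extensionFns) fn.accept(ext);
        return ext;
    }

    public static void main(String[] args) {
        Extensions ext = applyAll(Arrays.asList(
            e -> e.injectParser("DeltaSqlParser"),
            e -> e.injectParser("SomeOtherParser")));
        System.out.println(ext.parserBuilders); // [DeltaSqlParser, SomeOtherParser]
    }
}
```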
Let's look at the io.delta.sql.DeltaSparkSessionExtension class:

class DeltaSparkSessionExtension extends (SparkSessionExtensions => Unit) {
  override def apply(extensions: SparkSessionExtensions): Unit = {
    extensions.injectParser { (session, parser) =>
      new DeltaSqlParser(parser)
    }
    extensions.injectResolutionRule { session =>
      new DeltaAnalysis(session, session.sessionState.conf)
    }
    extensions.injectCheckRule { session =>
      new DeltaUnsupportedOperationsCheck(session)
    }
    extensions.injectPostHocResolutionRule { session =>
      new PreprocessTableUpdate(session.sessionState.conf)
    }
    extensions.injectPostHocResolutionRule { session =>
      new PreprocessTableMerge(session.sessionState.conf)
    }
    extensions.injectPostHocResolutionRule { session =>
      new PreprocessTableDelete(session.sessionState.conf)
    }
  }
}

The DeltaSqlParser class is what provides Delta's own syntax. How exactly does it do that, and what does it support?
Look at the extensions.injectParser code:

 private[this] val parserBuilders = mutable.Buffer.empty[ParserBuilder]

  private[sql] def buildParser(
      session: SparkSession,
      initial: ParserInterface): ParserInterface = {
    parserBuilders.foldLeft(initial) { (parser, builder) =>
      builder(session, parser)
    }
  }

  /**
   * Inject a custom parser into the [[SparkSession]]. Note that the builder is passed a session
   * and an initial parser. The latter allows for a user to create a partial parser and to delegate
   * to the underlying parser for completeness. If a user injects more parsers, then the parsers
   * are stacked on top of each other.
   */
  def injectParser(builder: ParserBuilder): Unit = {
    parserBuilders += builder
  }
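The foldLeft in buildParser and the parse-or-delegate behavior it enables can be sketched as a toy chain (hypothetical names throughout: Parser stands in for ParserInterface, BaseParser for SparkSqlParser, KeywordParser for DeltaSqlParser):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class ParserChain {
    // Toy stand-in for Spark's ParserInterface.
    interface Parser { String parsePlan(String sql); }

    // Stand-in for SparkSqlParser: handles every statement.
    static class BaseParser implements Parser {
        public String parsePlan(String sql) { return "base(" + sql + ")"; }
    }

    // Stand-in for DeltaSqlParser: handles only its own keyword and
    // delegates everything else to the parser it wraps.
    static class KeywordParser implements Parser {
        private final String keyword;
        private final Parser delegate;
        KeywordParser(String keyword, Parser delegate) {
            this.keyword = keyword;
            this.delegate = delegate;
        }
        public String parsePlan(String sql) {
            return sql.startsWith(keyword) ? keyword + "-plan" : delegate.parsePlan(sql);
        }
    }

    // Equivalent of parserBuilders.foldLeft(initial)((parser, builder) => builder(parser)):
    // each injected builder wraps the parser produced so far.
    static Parser build(Parser initial, List<Function<Parser, Parser>> builders) {
        Parser parser = initial;
        for (Function<Parser, Parser> builder : builders) parser = builder.apply(parser);
        return parser;
    }

    public static void main(String[] args) {
        Parser parser = build(new BaseParser(),
            Arrays.asList(delegate -> new KeywordParser("VACUUM", delegate)));
        System.out.println(parser.parsePlan("VACUUM '/tmp/t'")); // VACUUM-plan
        System.out.println(parser.parsePlan("SELECT 1"));        // base(SELECT 1)
    }
}
```

If more parsers were injected, each new one would wrap the previous result, which is exactly the "stacked on top of each other" behavior described in the injectParser comment.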

So buildParser wires up the DeltaSqlParser we passed in: DeltaSqlParser's delegate field gets assigned the initial parser.
buildParser in turn is called by BaseSessionStateBuilder:

 /**
   * Parser that extracts expressions, plans, table identifiers etc. from SQL texts.
   *
   * Note: this depends on the `conf` field.
   */
  protected lazy val sqlParser: ParserInterface = {
    extensions.buildParser(session, new SparkSqlParser(conf))
  }

So the actual argument bound to initial is a SparkSqlParser; in other words, SparkSqlParser becomes DeltaSqlParser's delegate. Now look at DeltaSqlParser's parsePlan method:

override def parsePlan(sqlText: String): LogicalPlan = parse(sqlText) { parser =>
    builder.visit(parser.singleStatement()) match {
      case plan: LogicalPlan => plan
      case _ => delegate.parsePlan(sqlText)
    }
  }

This is where ANTLR4 comes in: if DeltaSqlParser can parse the statement into a logical plan by itself, it does; otherwise it delegates the statement to SparkSqlParser. The parsing itself is the job of the DeltaSqlAstBuilder class:

class DeltaSqlAstBuilder extends DeltaSqlBaseBaseVisitor[AnyRef] {

  /**
   * Create a [[VacuumTableCommand]] logical plan. Example SQL:
   * {
  
  {
  
  {
   *   VACUUM ('/path/to/dir' | delta.`/path/to/dir`) [RETAIN number HOURS] [DRY RUN];
   * }}}
   */
  override def visitVacuumTable(ctx: VacuumTableContext): AnyRef = withOrigin(ctx) {
    VacuumTableCommand(
      Option(ctx.path).map(string),
      Option(ctx.table).map(visitTableIdentifier),
      Option(ctx.number).map(_.getText.toDouble),
      ctx.RUN != null)
  }

  override def visitDescribeDeltaDetail(
      ctx: DescribeDeltaDetailContext): LogicalPlan = withOrigin(ctx) {
    DescribeDeltaDetailCommand(
      Option(ctx.path).map(string),
      Option(ctx.table).map(visitTableIdentifier))
  }

  override def visitDescribeDeltaHistory(
      ctx: DescribeDeltaHistoryContext): LogicalPlan = withOrigin(ctx) {
    DescribeDeltaHistoryCommand(
      Option(ctx.path).map(string),
      Option(ctx.table).map(visitTableIdentifier),
      Option(ctx.limit).map(_.getText.toInt))
  }

  override def visitGenerate(ctx: GenerateContext): LogicalPlan = withOrigin(ctx) {
    DeltaGenerateCommand(
      modeName = ctx.modeName.getText,
      tableId = visitTableIdentifier(ctx.table))
  }

  override def visitConvert(ctx: ConvertContext): LogicalPlan = withOrigin(ctx) {
    ConvertToDeltaCommand(
      visitTableIdentifier(ctx.table),
      Option(ctx.colTypeList).map(colTypeList => StructType(visitColTypeList(colTypeList))),
      None)
  }

  override def visitSingleStatement(ctx: SingleStatementContext): LogicalPlan = withOrigin(ctx) {
    visit(ctx.statement).asInstanceOf[LogicalPlan]
  }

  protected def visitTableIdentifier(ctx: QualifiedNameContext): TableIdentifier = withOrigin(ctx) {
    ctx.identifier.asScala match {
      case Seq(tbl) => TableIdentifier(tbl.getText)
      case Seq(db, tbl) => TableIdentifier(tbl.getText, Some(db.getText))
      case _ => throw new ParseException(s"Illegal table name ${ctx.getText}", ctx)
    }
  }

  override def visitPassThrough(ctx: PassThroughContext): LogicalPlan = null
}

So where do methods like visitVacuumTable and visitDescribeDeltaDetail come from?
Take a look at DeltaSqlBase.g4:

singleStatement
    : statement EOF
    ;

// If you add keywords here that should not be reserved, add them to 'nonReserved' list.
statement
    : VACUUM (path=STRING | table=qualifiedName)
        (RETAIN number HOURS)? (DRY RUN)?                               #vacuumTable
    | (DESC | DESCRIBE) DETAIL (path=STRING | table=qualifiedName)      #describeDeltaDetail
    | GENERATE modeName=identifier FOR TABLE table=qualifiedName        #generate
    | (DESC | DESCRIBE) HISTORY (path=STRING | table=qualifiedName)
        (LIMIT limit=INTEGER_VALUE)?                                    #describeDeltaHistory
    | CONVERT TO DELTA table=qualifiedName
        (PARTITIONED BY '(' colTypeList ')')?                           #convert
    | .*?                                                               #passThrough
    ;

This is ANTLR4 grammar syntax; consult the ANTLR documentation if it is unfamiliar. Note that both Spark and Delta use ANTLR's visitor mode.
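As a reminder of how the labeled alternatives turn into visit methods: ANTLR generates one visitXxx method per #label (visitVacuumTable for #vacuumTable, visitPassThrough for #passThrough, and so on). A hand-rolled miniature of that visitor pattern (all names here are hypothetical stand-ins, not generated ANTLR classes):

```java
public class VisitorDemo {
    // Two toy parse-tree nodes, standing in for the contexts ANTLR generates
    // for the #vacuumTable and #passThrough grammar alternatives.
    interface Node { <T> T accept(Visitor<T> v); }
    interface Visitor<T> {
        T visitVacuumTable(VacuumTable node);
        T visitPassThrough(PassThrough node);
    }
    static class VacuumTable implements Node {
        final String path;
        VacuumTable(String path) { this.path = path; }
        public <T> T accept(Visitor<T> v) { return v.visitVacuumTable(this); }
    }
    static class PassThrough implements Node {
        public <T> T accept(Visitor<T> v) { return v.visitPassThrough(this); }
    }

    // Like DeltaSqlAstBuilder: build a "plan" for the statements we know,
    // and return null for #passThrough so the caller falls back to Spark.
    static class PlanBuilder implements Visitor<String> {
        public String visitVacuumTable(VacuumTable node) {
            return "VacuumTableCommand(" + node.path + ")";
        }
        public String visitPassThrough(PassThrough node) { return null; }
    }

    public static void main(String[] args) {
        System.out.println(new VacuumTable("/tmp/t").accept(new PlanBuilder()));
        System.out.println(new PassThrough().accept(new PlanBuilder()));
    }
}
```

The null returned for the pass-through case is what makes the `case _ => delegate.parsePlan(sqlText)` branch in DeltaSqlParser.parsePlan fire.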
Now compare this with the operations listed in the Delta documentation:

Vacuum
Describe History
Describe Detail
Generate
Convert to Delta
Convert Delta table to a Parquet table

These match up one-to-one: the Vacuum operation corresponds to the #vacuumTable alternative, Convert to Delta to #convert, and so on.

Delta extends Spark through Spark's public extension points, so we can follow the same approach to extend Spark and implement our own SQL syntax.
