Spark Streaming — Source Code Analysis of the BlockGenerator for Data Receiving

Source Code Analysis of Data Receiving

  In the previous post we saw that a Receiver receives and stores data mainly through the BlockGenerator. Below we walk through the source code and match it against the flow described earlier.
  First, these are the important components initialized when a BlockGenerator is created:

  // blockInterval has a default value of 200ms; it is the interval at which
  // buffered data is packaged into a block
  private val blockIntervalMs = conf.getTimeAsMs("spark.streaming.blockInterval", "200ms")
  require(blockIntervalMs > 0, s"'spark.streaming.blockInterval' should be a positive value")

  // A timer that invokes updateCurrentBuffer every 200ms (by default)
  private val blockIntervalTimer =
    new RecurringTimer(clock, blockIntervalMs, updateCurrentBuffer, "BlockGenerator")
  // The length of the blocksForPushing queue is configurable; the default is 10
  private val blockQueueSize = conf.getInt("spark.streaming.blockQueueSize", 10)
  // The blocksForPushing queue
  private val blocksForPushing = new ArrayBlockingQueue[Block](blockQueueSize)
  // blockPushingThread is a background thread; once started, it calls keepPushingBlocks(),
  // which periodically takes blocks out of the blocksForPushing queue
  private val blockPushingThread = new Thread() { override def run() { keepPushingBlocks() } }

  // currentBuffer caches the raw incoming records
  @volatile private var currentBuffer = new ArrayBuffer[Any]

The important fields above are summarized as follows:
blockInterval: defaults to 200ms; controls how often a block is generated
blockIntervalTimer: a timer that periodically turns the data in currentBuffer into a Block
blockQueueSize: the capacity of the queue that holds generated blocks
blocksForPushing: the queue that holds generated blocks
blockPushingThread: a thread that pushes the generated blocks so they end up stored in the BlockManager
currentBuffer: a buffer that caches the individual records arriving from the data source (see the addData sketch below)
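This post does not show how records reach currentBuffer in the first place. As a rough sketch (simplified from the Spark source, not the verbatim code), the Receiver's store() calls end up in BlockGenerator.addData(), which applies rate limiting and then appends to currentBuffer under the same lock the timer uses:

  // Simplified sketch of BlockGenerator.addData(); not the verbatim Spark source
  def addData(data: Any): Unit = {
    if (state == Active) {
      // Rate limiting inherited from the RateLimiter parent class
      // (e.g. spark.streaming.receiver.maxRate)
      waitToPush()
      synchronized {
        if (state == Active) {
          // Same lock that updateCurrentBuffer takes when swapping the buffer
          currentBuffer += data
        } else {
          throw new SparkException("Cannot add data as BlockGenerator has been stopped")
        }
      }
    } else {
      throw new SparkException("Cannot add data as BlockGenerator has not been started or has been stopped")
    }
  }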

BlockGenerator's start() method
// Starting the BlockGenerator really means starting its two key background workers:
  // blockIntervalTimer, which packages the raw data in currentBuffer into blocks,
  // and blockPushingThread, which takes blocks from blocksForPushing and pushes them
  // (eventually via pushArrayBuffer())
  def start(): Unit = synchronized {
    if (state == Initialized) {
      state = Active
      blockIntervalTimer.start()
      blockPushingThread.start()
      logInfo("Started BlockGenerator")
    } else {
      throw new SparkException(
        s"Cannot start BlockGenerator as its not in the Initialized state [state = $state]")
    }
  }

  As the code shows, start() simply starts the timer and the thread. Once the timer is running, every 200ms it takes the data accumulated in currentBuffer and generates a Block from it, while blockPushingThread pushes the blocks in the queue to the BlockManager. Let us first look at updateCurrentBuffer, the function the timer invokes.
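The RecurringTimer class itself is not shown in this post. Conceptually it is just a daemon thread that sleeps until the next interval boundary and then invokes the callback. A minimal self-contained sketch of that idea (an illustration only, not the actual Spark class, which additionally supports a pluggable Clock and graceful stop semantics):

class SimpleRecurringTimer(periodMs: Long, callback: Long => Unit, name: String) {
  private val thread = new Thread(name) {
    override def run(): Unit = {
      // Align the first tick to an interval boundary, like Spark does
      var nextTime = (System.currentTimeMillis() / periodMs + 1) * periodMs
      try {
        while (true) {
          val sleepMs = nextTime - System.currentTimeMillis()
          if (sleepMs > 0) Thread.sleep(sleepMs)
          callback(nextTime) // e.g. updateCurrentBuffer(nextTime)
          nextTime += periodMs
        }
      } catch {
        case _: InterruptedException => // stop() interrupts the thread
      }
    }
  }
  thread.setDaemon(true)
  def start(): Unit = thread.start()
  def stop(): Unit = thread.interrupt()
}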

blockIntervalTimer's updateCurrentBuffer
/** Change the buffer to which single records are added to. */
  private def updateCurrentBuffer(time: Long): Unit = {
    try {
      var newBlock: Block = null
      synchronized {
        if (currentBuffer.nonEmpty) {
          // Hand the current buffer over to newBlockBuffer, then reset currentBuffer
          val newBlockBuffer = currentBuffer
          currentBuffer = new ArrayBuffer[Any]
          // Generate a unique blockId based on the timestamp
          val blockId = StreamBlockId(receiverId, time - blockIntervalMs)
          // This listener callback is a no-op by default
          listener.onGenerateBlock(blockId)
          // Create a block
          newBlock = new Block(blockId, newBlockBuffer)
        }
      }

      // Put the block into the blocksForPushing queue
      if (newBlock != null) {
        blocksForPushing.put(newBlock)  // put is blocking when queue is full
      }
    } catch {
      case ie: InterruptedException =>
        logInfo("Block updating timer thread was interrupted")
      case e: Exception =>
        reportError("Error in block updating thread", e)
    }
  }

  Note that the buffer swap happens inside a synchronized block, which guards against concurrent writes (records are appended to currentBuffer under the same lock). The current buffer is first handed over to newBlockBuffer and currentBuffer is re-created, which effectively clears it. A unique blockId is then generated from the timestamp, a Block is created from the buffered data, and the block is put into the blocksForPushing queue; the blocking behavior of put is demonstrated below.
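The comment on blocksForPushing.put() deserves attention: because blocksForPushing is a bounded ArrayBlockingQueue, a slow pusher eventually blocks the timer thread, which acts as a natural form of back-pressure. A small standalone demo of that blocking behavior (the names here are illustrative, not from Spark):

import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

object QueueBackpressureDemo {
  def main(args: Array[String]): Unit = {
    val queue = new ArrayBlockingQueue[String](2) // tiny capacity for the demo
    queue.put("block-1")
    queue.put("block-2")
    // put() would now block until space frees up; offer() with a timeout gives up instead
    val accepted = queue.offer("block-3", 100, TimeUnit.MILLISECONDS)
    println(s"third block accepted: $accepted") // false: the queue is full
    queue.poll() // a consumer (like blockPushingThread) frees a slot
    println(s"retry accepted: ${queue.offer("block-3")}") // true
  }
}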
  Next, let us look at the logic blockPushingThread executes: keepPushingBlocks.

blockPushingThread's keepPushingBlocks
private def keepPushingBlocks() {
    logInfo("Started block pushing thread")
    // Check whether the BlockGenerator is still generating blocks
    def areBlocksBeingGenerated: Boolean = synchronized {
      state != StoppedGeneratingBlocks
    }

    try {
      // As long as blocks keep being generated, keep taking them
      // out of the blocksForPushing queue
      while (areBlocksBeingGenerated) {
        // poll the block at the head of the blocksForPushing queue,
        // using a 10ms timeout on the blocking queue
        Option(blocksForPushing.poll(10, TimeUnit.MILLISECONDS)) match {
          // If a block was obtained, call pushBlock
          case Some(block) => pushBlock(block)
          case None =>
        }
      }

      // At this point, state is StoppedGeneratingBlock. So drain the queue of to-be-pushed blocks.
      logInfo("Pushing out the last " + blocksForPushing.size() + " blocks")
      while (!blocksForPushing.isEmpty) {
        val block = blocksForPushing.take()
        logDebug(s"Pushing block $block")
        pushBlock(block)
        logInfo("Blocks left to push " + blocksForPushing.size())
      }
      logInfo("Stopped block pushing thread")
    } catch {
      case ie: InterruptedException =>
        logInfo("Block pushing thread was interrupted")
      case e: Exception =>
        reportError("Error in block pushing thread", e)
    }
  }

  As long as the BlockGenerator is running and has not been told to stop generating blocks, this loop keeps taking blocks out of the blocksForPushing queue and pushing them; the poll on the blocking queue uses a 10ms timeout so the loop can re-check the state regularly. Once generation stops, the remaining blocks in the queue are drained and pushed.
  Each dequeued block is pushed via pushBlock(), which invokes the BlockGeneratorListener's onPushBlock() callback. In ReceiverSupervisorImpl, onPushBlock() hands the block's data to pushArrayBuffer(), which in turn ends up calling pushAndReportBlock(), as the listener sketch below shows.
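For reference, the listener that ReceiverSupervisorImpl registers with its BlockGenerator looks roughly like this (simplified from the Spark source; treat it as a sketch rather than the verbatim code):

// Sketch of the BlockGeneratorListener wired up in ReceiverSupervisorImpl
private val defaultBlockGeneratorListener = new BlockGeneratorListener {
  def onAddData(data: Any, metadata: Any): Unit = { }      // no-op here
  def onGenerateBlock(blockId: StreamBlockId): Unit = { }  // no-op here
  def onError(message: String, throwable: Throwable): Unit = {
    reportError(message, throwable)
  }
  def onPushBlock(blockId: StreamBlockId, arrayBuffer: ArrayBuffer[_]): Unit = {
    // pushArrayBuffer wraps the buffer in an ArrayBufferBlock and calls
    // pushAndReportBlock(ArrayBufferBlock(arrayBuffer), None, Some(blockId))
    pushArrayBuffer(arrayBuffer, None, Some(blockId))
  }
}

pushAndReportBlock() itself is the method we analyze next.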

ReceiverSupervisorImpl's pushAndReportBlock: pushing a block
def pushAndReportBlock(
      receivedBlock: ReceivedBlock,
      metadataOption: Option[Any],
      blockIdOption: Option[StreamBlockId]
    ) {
    // Determine the blockId
    val blockId = blockIdOption.getOrElse(nextBlockId)
    // Record the current system time
    val time = System.currentTimeMillis
    // Use receivedBlockHandler.storeBlock to store the block in the BlockManager;
    // this is where the write-ahead log mechanism shows up in the source
    val blockStoreResult = receivedBlockHandler.storeBlock(blockId, receivedBlock)
    logDebug(s"Pushed block $blockId in ${(System.currentTimeMillis - time)} ms")
    // Number of records in the block
    val numRecords = blockStoreResult.numRecords
    // Wrap everything in a ReceivedBlockInfo containing the streamId and the block store result
    val blockInfo = ReceivedBlockInfo(streamId, numRecords, metadataOption, blockStoreResult)
    // Send an AddBlock message to the ReceiverTracker via its RPC endpoint
    trackerEndpoint.askWithRetry[Boolean](AddBlock(blockInfo))
    logDebug(s"Reported block $blockId")
  }

  This method does two things: it calls receivedBlockHandler.storeBlock to save the block to the BlockManager (and, if enabled, to the write-ahead log), and it wraps the stored block's information in a ReceivedBlockInfo and sends it to the ReceiverTracker. Let us look at the first part.
  The receivedBlockHandler component used to store blocks is created differently depending on whether the write-ahead log is enabled, as shown below:

private val receivedBlockHandler: ReceivedBlockHandler = {
    // The write-ahead log is controlled by spark.streaming.receiver.writeAheadLog.enable,
    // which defaults to false.
    // If it is true, the ReceivedBlockHandler is a WriteAheadLogBasedBlockHandler;
    // otherwise a BlockManagerBasedBlockHandler is created
    if (WriteAheadLogUtils.enableReceiverLog(env.conf)) {
      if (checkpointDirOption.isEmpty) {
        throw new SparkException(
          "Cannot enable receiver write-ahead log without checkpoint directory set. " +
            "Please use streamingContext.checkpoint() to set the checkpoint directory. " +
            "See documentation for more details.")
      }
      new WriteAheadLogBasedBlockHandler(env.blockManager, receiver.streamId,
        receiver.storageLevel, env.conf, hadoopConf, checkpointDirOption.get)
    } else {
      new BlockManagerBasedBlockHandler(env.blockManager, receiver.storageLevel)
    }
  }

  Whether the write-ahead log is enabled is decided by whether spark.streaming.receiver.writeAheadLog.enable is set to true. If so, a WriteAheadLogBasedBlockHandler is created; otherwise a BlockManagerBasedBlockHandler is created. A configuration sketch follows.
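To make this concrete, enabling the receiver write-ahead log takes one configuration flag plus a checkpoint directory. A minimal setup might look like this (the application name and HDFS path are placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("wal-demo") // placeholder name
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
val ssc = new StreamingContext(conf, Seconds(2))
// Without a checkpoint directory, the SparkException shown above is thrown
ssc.checkpoint("hdfs://namenode:8020/spark/checkpoints") // placeholder path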
  Next, let us analyze the storeBlock method of WriteAheadLogBasedBlockHandler:

def storeBlock(blockId: StreamBlockId, block: ReceivedBlock): ReceivedBlockStoreResult = {
    var numRecords = None: Option[Long]
    // First serialize the block's data
    val serializedBlock = block match {
      case ArrayBufferBlock(arrayBuffer) =>
        numRecords = Some(arrayBuffer.size.toLong)
        blockManager.dataSerialize(blockId, arrayBuffer.iterator)
      case IteratorBlock(iterator) =>
        val countIterator = new CountingIterator(iterator)
        val serializedBlock = blockManager.dataSerialize(blockId, countIterator)
        numRecords = countIterator.count
        serializedBlock
      case ByteBufferBlock(byteBuffer) =>
        byteBuffer
      case _ =>
        throw new Exception(s"Could not push $blockId to block manager, unexpected block type")
    }

    // Save the data to the BlockManager. The effective storage level here carries
    // the _SER and _2 flags: serialized, with a replica on another executor's
    // BlockManager for fault tolerance
    val storeInBlockManagerFuture = Future {
      val putResult =
        blockManager.putBytes(blockId, serializedBlock, effectiveStorageLevel, tellMaster = true)
      if (!putResult.map { _._1 }.contains(blockId)) {
        throw new SparkException(
          s"Could not store $blockId to block manager with storage level $storageLevel")
      }
    }

    // Write the block to the write-ahead log, again wrapped in a Future
    val storeInWriteAheadLogFuture = Future {
      writeAheadLog.write(serializedBlock, clock.getTimeMillis())
    }

    // Wait for both writes to complete, combine the results, and return them
    val combinedFuture = storeInBlockManagerFuture.zip(storeInWriteAheadLogFuture).map(_._2)
    val walRecordHandle = Await.result(combinedFuture, blockStoreTimeout)
    WriteAheadLogBasedStoreResult(blockId, numRecords, walRecordHandle)
  }

  This method has two main steps. First the block's data is serialized; then it is stored in the BlockManager, where the effective storage level carries the _SER and _2 flags, meaning the data is serialized and a replica is placed on another executor's BlockManager. In parallel, the block's data is written to the write-ahead log (typically files on HDFS).
  The write-ahead-log path therefore provides two fault-tolerance measures: the data is replicated to an executor on another worker node (via the _SER and _2 storage level), and it is also persisted to the write-ahead log. This double safeguard gives strong fault tolerance, at some cost in performance. The two writes run in parallel as Futures and are joined with zip, as sketched below.
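The zip-and-await pattern used above is plain Scala Futures. Here is a standalone sketch of the same pattern with the Spark calls stubbed out (the string results stand in for the real put result and WAL record handle):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Two independent writes run in parallel; zip ties them together, so a
// failure in either future fails the combined future as well
val storeInBlockManager = Future { /* blockManager.putBytes(...) */ "put-ok" }
val storeInWal = Future { /* writeAheadLog.write(...) */ "wal-record-handle" }

// map(_._2) keeps only the second element of the zipped pair,
// i.e. the WAL record handle, just like in storeBlock above
val combined = storeInBlockManager.zip(storeInWal).map(_._2)
val walRecordHandle = Await.result(combined, 30.seconds)
println(walRecordHandle) // prints "wal-record-handle"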
  Now for the second part: sending the ReceivedBlockInfo to the ReceiverTracker. Briefly, when the ReceiverTracker receives the AddBlock message it checks whether the write-ahead log is enabled; if so, the block's metadata is also written to the write-ahead log, and otherwise it is only kept in memory, as sketched below.
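A rough sketch of that bookkeeping on the driver side (simplified from ReceivedBlockTracker in the Spark source; the helper names follow that class, but treat this as an illustration, not the verbatim code):

// Simplified sketch: record a block's metadata, writing it to the WAL first
// when one is configured, then tracking it in an in-memory queue per stream
def addBlock(receivedBlockInfo: ReceivedBlockInfo): Boolean = {
  // writeToLog returns true immediately when no WAL is configured
  val writeResult = writeToLog(BlockAdditionEvent(receivedBlockInfo))
  if (writeResult) {
    synchronized {
      getReceivedBlockQueue(receivedBlockInfo.streamId) += receivedBlockInfo
    }
  }
  writeResult
}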
  To summarize: data receiving and storage rely on the BlockGenerator to buffer, package, and push incoming data, ultimately landing it in the BlockManager (and, if enabled, the write-ahead log). The timer blockIntervalTimer fires every 200ms (by default), takes everything in currentBuffer, packages it into a block, and puts it into the blocksForPushing queue. blockPushingThread keeps taking blocks out of that bounded blocking queue (polling with a 10ms timeout) and pushes each one via the BlockGeneratorListener's onPushBlock() (which ends up calling pushArrayBuffer()), so that the data is stored in the BlockManager (and also written to the write-ahead log if enabled) and an AddBlock message is sent to the ReceiverTracker to register the block.
