The Relationship Between DStream and RDD

How are RDDs generated?

What do RDDs rely on to be generated? They are produced from DStreams.

What is the basis on which RDDs are generated?

Does the execution of RDDs in Spark Streaming differ in any way from the execution of RDDs in Spark Core?

What do we do with the RDDs after they have run?

A ForEachDStream does not necessarily trigger the execution of a Job, but it always triggers the generation of a Job; generation is independent of whether the Job is actually executed.

Question: what do RDDs rely on to be generated?

The following example is used to show that RDDs are produced from DStreams.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Duration, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("wordcount")
    val sc = new StreamingContext(conf, Duration(3000))
    // The input DStream (feed the socket with e.g. `nc -lk 9999`)
    val streamDs: ReceiverInputDStream[String] = sc.socketTextStream("localhost", 9999)
    // Everything between input and output is a transformation DStream
    val words: DStream[String] = streamDs.flatMap(_.split(" "))
    val word: DStream[(String, Int)] = words.map((_, 1))
    val wordCount: DStream[(String, Int)] = word.reduceByKey((x, y) => {
      x + y
    })
    // print() internally triggers an Action-level operation; it is the output DStream
    wordCount.print()
    sc.start()
    sc.awaitTermination()
  }
}

Analysing the code, this example produces the following DStreams in order, and they depend on one another from back to front:

ReceiverInputDStream -> new FlatMappedDStream(this, context.sparkContext.clean(flatMapFunc)) -> new MappedDStream(this, context.sparkContext.clean(mapFunc)) -> new ShuffledDStream[K, V, C](self, cleanedCreateCombiner, cleanedMergeValue, cleanedMergeCombiner, partitioner, mapSideCombine) -> ForEachDStream
Simplified: ReceiverInputDStream --> FlatMappedDStream --> MappedDStream --> ShuffledDStream --> ForEachDStream
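
As a quick sanity check, this back-to-front chain can also be observed at runtime by walking each DStream's public dependencies list. A minimal sketch, assuming the wordCount DStream from the word-count example above (printLineage is a hypothetical helper, not part of Spark):

import org.apache.spark.streaming.dstream.DStream

// Recursively print a DStream and the DStreams it depends on.
def printLineage(stream: DStream[_], depth: Int = 0): Unit = {
  println(("  " * depth) + stream.getClass.getSimpleName)
  stream.dependencies.foreach(printLineage(_, depth + 1))
}

printLineage(wordCount)
// Roughly prints, from back to front:
// ShuffledDStream
//   MappedDStream
//     FlatMappedDStream
//       SocketInputDStream   (a ReceiverInputDStream)
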
How do we prove in the source code that these DStreams depend on one another? Pick one DStream as an entry point for the analysis, for example MappedDStream:
  /** Return a new DStream by applying a function to all elements of this DStream. */
  def map[U: ClassTag](mapFunc: T => U): DStream[U] = ssc.withScope {
    new MappedDStream(this, context.sparkContext.clean(mapFunc))
  }

Inside the MappedDStream class:
private[streaming]
class MappedDStream[T: ClassTag, U: ClassTag] (
    parent: DStream[T],
    mapFunc: T => U
  ) extends DStream[U](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  override def compute(validTime: Time): Option[RDD[U]] = {
    parent.getOrCompute(validTime).map(_.map[U](mapFunc))
  }
}

The compute method of MappedDStream obtains the parent DStream's RDD for the given batch time and then applies map to it; mapFunc is the business logic we pass in. This demonstrates the dependency between DStreams.
Question: why do DStreams depend on one another from back to front?
Because a DStream represents the Spark Streaming business logic and is lazy, and RDDs depend on one another from back to front, the dependency relationships between DStreams must stay strictly consistent with the dependency relationships between RDDs. The Spark source describes a DStream as follows:
A DStream internally is characterized by a few basic properties:
A list of other DStreams that the DStream depends on
A time interval at which the DStream generates an RDD
A function that is used to generate an RDD after each time interval
Roughly speaking:

1. A DStream depends on other DStreams, except for the first DStream, which is based on the data source and receives the data, and therefore has no further dependency. This further confirms that DStreams depend on one another from back to front.

2. How is an RDD produced from a DStream? Every BatchDuration, the DStream generates one RDD.

3. Every BatchDuration, the DStream's internal compute function generates that RDD (see the sketch after this list).
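
To make these three properties concrete, here is a minimal sketch of a hypothetical custom DStream (UppercaseDStream is not part of Spark) that spells them out in the same shape as MappedDStream. Note that getOrCompute and the ssc constructor field are private[streaming], so a class like this only compiles when placed under the org.apache.spark.streaming package hierarchy, as the built-in DStreams are:

package org.apache.spark.streaming.dstream

import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Duration, Time}

class UppercaseDStream(parent: DStream[String])
  extends DStream[String](parent.ssc) {

  // 1. The list of other DStreams that this DStream depends on
  override def dependencies: List[DStream[_]] = List(parent)

  // 2. The time interval at which this DStream generates an RDD
  override def slideDuration: Duration = parent.slideDuration

  // 3. The function used to generate an RDD for each interval, built from the
  //    parent's RDD for the same batch time
  override def compute(validTime: Time): Option[RDD[String]] =
    parent.getOrCompute(validTime).map(_.map(_.toUpperCase))
}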

abstract class DStream[T: ClassTag] (
    @transient private[streaming] var ssc: StreamingContext
  ) extends Serializable with Logging {

  // RDDs generated, marked as private[streaming] so that testsuites can access it
  // A DStream is a template for RDDs: every batchInterval an RDD is generated from the
  // DStream template and stored in the DStream's generatedRDDs data structure.
  @transient
  private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()

}
 

At this point we have verified the conclusion that RDDs are generated by DStreams.

In the next part we analyse exactly how a DStream generates its RDDs.

How RDDs are generated inside a DStream:

// A DStream is a template for RDDs: every batchInterval an RDD is generated from the
// DStream template and stored in the DStream's generatedRDDs data structure.
@transient
private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()

Where is generatedRDDs populated? Once we understand where entries are put into this HashMap, we know how RDDs are produced.

1. Enter the getOrCompute method of DStream:

  /**
   * Get the RDD corresponding to the given time; either retrieve it from cache
   * or compute-and-cache it.
   * First check whether the HashMap already holds an RDD for this time; if not,
   * call compute to produce the RDD and put it into the HashMap.
   */
  private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
    // If RDD was already generated, then retrieve it from HashMap,
    // or else compute the RDD. Check the cache first and return it directly if present.
    generatedRDDs.get(time).orElse {
      // Compute the RDD if time is valid (e.g. correct time in a sliding window)
      // of RDD generation, else generate nothing.
      if (isTimeValid(time)) {

        val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
          // Disable checks for existing output directories in jobs launched by the streaming
          // scheduler, since we may need to write output to an existing directory during checkpoint
          // recovery; see SPARK-4835 for more details. We need to have this call here because
          // compute() might cause Spark jobs to be launched.
          SparkHadoopWriterUtils.disableOutputSpecValidation.withValue(true) {
            // Compute the RDD for this batch time
            compute(time)
          }
        }

        // rddOption carries the newly generated RDD, which is put into generatedRDDs
        rddOption.foreach { case newRDD =>
          // Register the generated RDD for caching and checkpointing
          if (storageLevel != StorageLevel.NONE) {
            newRDD.persist(storageLevel)
            logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
          }
          if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
            newRDD.checkpoint()
            logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
          }
          generatedRDDs.put(time, newRDD)
        }
        rddOption
      } else {
        None
      }
    }
  }

Entering the compute method, we find that it has no concrete implementation, which means it is overridden in the subclasses, where the RDD is actually generated:

/** Method that generates a RDD for the given time */
def compute(validTime: Time): Option[RDD[T]]

2. Enter the compute method of ReceiverInputDStream:

  /**
   * Generates RDDs with blocks received by the receiver of this stream.
   */
  override def compute(validTime: Time): Option[RDD[T]] = {
    val blockRDD = {

      if (validTime < graph.startTime) {
        // If this is called for any time before the start time of the context,
        // then this returns an empty RDD. This may happen when recovering from a
        // driver failure without any write ahead log to recover pre-failure data.
        new BlockRDD[T](ssc.sc, Array.empty)
      } else {
        // Otherwise, ask the tracker for all the blocks that have been allocated to this stream
        // for this batch
        val receiverTracker = ssc.scheduler.receiverTracker
        val blockInfos = receiverTracker.getBlocksOfBatch(validTime).getOrElse(id, Seq.empty)

        // Register the input blocks information into InputInfoTracker
        val inputInfo = StreamInputInfo(id, blockInfos.flatMap(_.numRecords).sum)
        ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

        // Create and return the BlockRDD. ReceiverInputDStream has no parent DStream,
        // so it generates the RDD itself; if there is no input data this produces
        // a series of empty RDDs.
        createBlockRDD(validTime, blockInfos)
      }
    }
    Some(blockRDD)
  }

 

Note: Spark Streaming still produces an RDD (an empty BlockRDD) even when there is no input data, so this is a place where the source could be modified to improve performance. Thinking about it the other way round, stream processing is really just batch processing performed over extremely short time intervals. In user code, the usual way to avoid doing work on these empty batches is to guard on the RDD, as in the sketch below.
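
A minimal sketch (reusing the wordCount DStream from the earlier example) that skips the per-batch output work when the generated RDD carries no data:

wordCount.foreachRDD { rdd =>
  // isEmpty() launches a small job itself, but avoids processing empty batches
  if (!rdd.isEmpty()) {
    rdd.collect().foreach(println)
  }
}
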

3. Next, enter the compute method of MappedDStream:

class MappedDStream[T: ClassTag, U: ClassTag] (
    parent: DStream[T],
    mapFunc: T => U
  ) extends DStream[U](parent.ssc) {

  // Except for the first DStream, every DStream starts its computation from the
  // RDD produced by the DStream in front of it
  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  override def compute(validTime: Time): Option[RDD[U]] = {
    // getOrCompute returns the parent's RDD, and the map that follows operates on that RDD.
    // Computation on a DStream is really computation on RDDs, and mapFunc is the concrete
    // business logic we want to apply.
    parent.getOrCompute(validTime).map(_.map[U](mapFunc))
  }
}

 


4. Enter the compute method of ForEachDStream:

We find that its compute method performs no operation at all, but it overrides the generateJob method!

private[streaming]
class ForEachDStream[T: ClassTag] (
    parent: DStream[T],
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean
  ) extends DStream[Unit](parent.ssc) {
 
  override def dependencies: List[DStream[_]] = List(parent)
 
  override def slideDuration: Duration = parent.slideDuration
 
  override def compute(validTime: Time): Option[RDD[Unit]] = None
 
  override def generateJob(time: Time): Option[Job] = {
    parent.getOrCompute(time) match {
      case Some(rdd) =>
        val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
          foreachFunc(rdd, time)
        }
        Some(new Job(time, jobFunc))
      case None => None
    }
  }
}

Note that jobFunc must contain an RDD action; that action is only triggered when jobFunc is invoked, that is, when the Job is actually run, as illustrated below.
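
A minimal sketch (again reusing the wordCount DStream from the earlier example): the closure passed to foreachRDD becomes the foreachFunc of a ForEachDStream, and it is the RDD action inside that closure that launches actual computation when the Job runs:

wordCount.foreachRDD { (rdd, time) =>
  // take() is the RDD action; without an action here the generated Job
  // would run without triggering any Spark computation
  val top = rdd.take(5)
  println(s"Batch $time: ${top.mkString(", ")}")
}
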

5. Start from Job generation: the generateJobs method of JobGenerator, which internally calls the generateJobs method of DStreamGraph:

/** Generate jobs and perform checkpoint for the given `time`.  */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    // Obtain the blocks received for this particular batch time
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    // Call DStreamGraph.generateJobs to generate the jobs
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}

The generateJobs method of DStreamGraph calls the generateJob method of each output stream, and those output streams are exactly the ForEachDStreams:

def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}
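
As a small illustration (hypothetical, building on the earlier word-count example): every output operation registers one more ForEachDStream in outputStreams, so generateJobs returns one Job per output operation for every batch:

wordCount.print()                                  // output operation #1 -> ForEachDStream #1
wordCount.foreachRDD(rdd => println(rdd.count()))  // output operation #2 -> ForEachDStream #2
// With both registered, DStreamGraph.generateJobs(time) yields two jobs per batch.
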

Summary: a DStream is a template for RDDs; its internal generatedRDDs map holds the RDD instance generated for each BatchDuration. The dependencies between DStreams define the dependencies between the RDDs, so when computing from back to front, only the last DStream needs to be evaluated. Every BatchDuration, JobGenerator calls DStreamGraph's generateJobs method, which in turn calls ForEachDStream's generateJob method; that method first calls the parent DStream's getOrCompute to obtain the RDD and then performs the computation. Following the chain from back to front, the first DStream is the ReceiverInputDStream, whose compute method fetches the metadata for the corresponding time interval from the receiverTracker, builds a BlockRDD object, and puts it into generatedRDDs.

Reposted from: https://www.cnblogs.com/game-bigdata/p/5521660.html


