Blog: http://blog.csdn.net/yueqian_zhu/
The previous two sections covered the construction of StreamingContext and the series of operations defined on it.
Calling the start method kicks off actual scheduling and execution. It first verifies that the state is INITIALIZED, then calls JobScheduler's start method and sets the state to ACTIVE.
Let's look inside JobScheduler's start method:
def start(): Unit = synchronized {
  if (eventLoop != null) return // scheduler has already been started

  logDebug("Starting JobScheduler")
  eventLoop = new EventLoop[JobSchedulerEvent]("JobScheduler") {
    override protected def onReceive(event: JobSchedulerEvent): Unit = processEvent(event)
    override protected def onError(e: Throwable): Unit = reportError("Error in job scheduler", e)
  }
  eventLoop.start()

  listenerBus.start(ssc.sparkContext)
  receiverTracker = new ReceiverTracker(ssc)
  inputInfoTracker = new InputInfoTracker(ssc)
  receiverTracker.start()
  jobGenerator.start()
  logInfo("Started JobScheduler")
}
1. First, construct eventLoop, an event loop for events of type [JobSchedulerEvent] (comprising the three events JobStarted, JobCompleted, and ErrorReported). Internally, a thread continuously polls the queue for events and processes any it finds, ultimately invoking the onReceive/onError methods shown above. After eventLoop.start, that internal thread is actually running and waiting for events to arrive.
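The queue-plus-thread pattern described above can be sketched as a self-contained stand-in (SimpleEventLoop is my own name; Spark's actual implementation is org.apache.spark.util.EventLoop, which differs in detail):

```scala
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.atomic.AtomicBoolean

// Minimal sketch of the event-loop pattern: a background thread drains a
// blocking queue and dispatches each event to onReceive, routing any
// processing exception to onError.
abstract class SimpleEventLoop[E](name: String) {
  private val queue = new LinkedBlockingQueue[E]()
  private val stopped = new AtomicBoolean(false)

  private val thread = new Thread(name) {
    override def run(): Unit = {
      try {
        while (!stopped.get) {
          val event = queue.take() // blocks until an event arrives
          try onReceive(event)
          catch { case e: Throwable => onError(e) }
        }
      } catch {
        case _: InterruptedException => // stop() interrupts the blocking take()
      }
    }
  }
  thread.setDaemon(true)

  def start(): Unit = thread.start()
  def post(event: E): Unit = queue.put(event)
  def stop(): Unit = { stopped.set(true); thread.interrupt() }

  protected def onReceive(event: E): Unit
  protected def onError(e: Throwable): Unit
}
```

Posting an event never blocks the caller; processing happens asynchronously on the loop's own thread, which is why JobScheduler can post JobStarted/JobCompleted from job-handler threads without contention.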
2. Construct the ReceiverTracker:
(1) obtain the registered ReceiverInputStreams from the DStreamGraph;
(2) collect the streamIds of all the ReceiverInputStreams;
(3) construct a ReceiverLauncher, which is responsible for launching the receivers;
(4) construct a ReceivedBlockTracker, which maintains the metadata (ReceivedBlockInfo) of every block received by every receiver.
3. Call receiverTracker's start method.
If receiverInputStreams is non-empty, it sets up an Akka RPC endpoint named ReceiverTracker, which handles four events: RegisterReceiver, AddBlock, ReportError, and DeregisterReceiver.
It then calls receiverExecutor's start method, which ultimately invokes startReceivers:
/**
 * Get the receivers from the ReceiverInputDStreams, distribute them to the
 * worker nodes as a parallel collection, and run them.
 */
private def startReceivers() {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })

  // Right now, we only honor preferences if all receivers have them
  val hasLocationPreferences = receivers.map(_.preferredLocation.isDefined).reduce(_ && _)

  // Create the parallel collection of receivers to distribute them on the worker nodes
  val tempRDD =
    if (hasLocationPreferences) {
      val receiversWithPreferences = receivers.map(r => (r, Seq(r.preferredLocation.get)))
      ssc.sc.makeRDD[Receiver[_]](receiversWithPreferences)
    } else {
      ssc.sc.makeRDD(receivers, receivers.size)
    }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf = new SerializableWritable(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiver = (iterator: Iterator[Receiver[_]]) => {
    if (!iterator.hasNext) {
      throw new SparkException("Could not start receiver as object not found.")
    }
    val receiver = iterator.next()
    val supervisor = new ReceiverSupervisorImpl(
      receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
    supervisor.start()
    supervisor.awaitTermination()
  }

  // Run a dummy Spark job to ensure that all slaves have registered.
  // This prevents all the receivers from being scheduled on the same node.
  if (!ssc.sparkContext.isLocal) {
    ssc.sparkContext.makeRDD(1 to 50, 50).map(x => (x, 1)).reduceByKey(_ + _, 20).collect()
  }

  // Distribute the receivers and start them
  logInfo("Starting " + receivers.length + " receivers")
  running = true
  ssc.sparkContext.runJob(tempRDD, ssc.sparkContext.clean(startReceiver))
  running = false
  logInfo("All of the receivers have been terminated")
}
1) Get all the receivers.
2) Build tempRDD from the receivers and parallelize it across partitions, one element (a receiver) per partition.
3) Define the function startReceiver, which takes an iterator over a partition's elements (receivers) as its parameter; this function is then passed into runJob, which applies it in turn to the elements of every partition.
4) Inside the startReceiver function invoked by runJob: since each partition holds exactly one receiver, the function constructs a ReceiverSupervisorImpl, which does the actual receiving and storing of data, and sends a RegisterReceiver message to the driver.
Let's focus on the logic inside supervisor.start, which boils down to the following two methods:
/** Start the supervisor */
def start() {
  onStart()
  startReceiver()
}
(1) The onStart method:
override protected def onStart() {
  blockGenerator.start()
}
- The data is actually received in the SocketReceiver.receive function, which appends incoming records to BlockGenerator.currentBuffer.
- BlockGenerator runs a recurring timer whose handler, updateCurrentBuffer, wraps the data currently in the buffer into a new Block and appends it to the blocksForPushing queue.
- BlockGenerator also runs a BlockPushingThread, whose job is to keep taking members off the blocksForPushing queue and handing them, via the pushArrayBuffer function, to the BlockManager, which stores the data in MemoryStore.
- pushArrayBuffer also reports the id of each Block stored by the BlockManager to the ReceiverTracker, which appends the stored blockId to the queue of the corresponding streamId.
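The currentBuffer-to-blocksForPushing hand-off described above can be sketched as follows (a simplified stand-in whose names mirror BlockGenerator; it is not Spark's code, and the push thread and BlockManager side are omitted):

```scala
import scala.collection.mutable.ArrayBuffer
import java.util.concurrent.ArrayBlockingQueue

// Sketch of BlockGenerator's buffering: the receiver appends records to
// currentBuffer; the recurring timer periodically swaps the buffer out as
// one Block and enqueues it for the pushing thread.
class SimpleBlockGenerator[T] {
  case class Block(data: ArrayBuffer[T])

  private var currentBuffer = new ArrayBuffer[T]
  val blocksForPushing = new ArrayBlockingQueue[Block](10)

  // Called by the receiver for every record (SocketReceiver.receive in the text)
  def addData(record: T): Unit = synchronized { currentBuffer += record }

  // Called by the recurring timer every block interval (updateCurrentBuffer):
  // swap out the non-empty buffer and enqueue it as a single Block.
  def updateCurrentBuffer(): Unit = synchronized {
    if (currentBuffer.nonEmpty) {
      val newBlock = Block(currentBuffer)
      currentBuffer = new ArrayBuffer[T]
      blocksForPushing.put(newBlock)
    }
  }
}
```

The key design point is the swap: the timer replaces the buffer reference rather than copying element by element, so the receiver can keep appending while the completed Block travels down the push queue.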
(2) The startReceiver method:
/** Start receiver */
def startReceiver(): Unit = synchronized {
  try {
    logInfo("Starting receiver")
    receiver.onStart()
    logInfo("Called receiver onStart")
    onReceiverStart()
    receiverState = Started
  } catch {
    case t: Throwable =>
      stop("Error starting receiver " + streamId, Some(t))
  }
}
1) receiver.onStart opens the socket connection, reads the data line by line, and ultimately inserts it into BlockGenerator's currentBuffer. Once data has been inserted, the recurring timer described above takes over: at the configured block interval (200 ms by default) it produces a block and appends it to the blocksForPushing queue. The blockPushingThread then takes blocks off the queue one by one and hands them to the BlockManager for storage, while notifying the ReceiverTracker via AddBlock messages which blocks have been stored in the BlockManager.
2) onReceiverStart sends a RegisterReceiver message to the receiverTracker (on the driver side) to report that this receiver has started, so that this can be reflected in the UI. The ReceiverTracker records, per stream, the blocks that have been received but not yet processed in receiverInfo, a HashMap; later, generateJobs draws on this data to build the corresponding RDDs.
4. Call jobGenerator's start method.
(1) First it builds an EventLoop for events of type JobGeneratorEvent, covering the four events GenerateJobs, ClearMetadata, DoCheckpoint, and ClearCheckpointData, and starts it running.
(2) Call startFirstTime to start the generator:
/** Starts the generator for the first time */
private def startFirstTime() {
  val startTime = new Time(timer.getStartTime())
  graph.start(startTime - graph.batchDuration)
  timer.start(startTime.milliseconds)
  logInfo("Started JobGenerator at " + startTime)
}
timer.getStartTime computes the point at which the next period ends, using the formula (math.floor(clock.currentTime.toDouble / period) + 1).toLong * period: dividing the current time by the interval and taking the floor gives the index of the period boundary just passed (i.e. the end of the previous period); adding 1 and multiplying by the period yields the time at which the next period ends.
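The formula can be lifted into a standalone function and checked with concrete numbers; for example, with a 200 ms period and a current time of 1050 ms, the next period ends at 1200 ms (the wrapping object name is mine, the arithmetic is taken verbatim from the text):

```scala
object RecurringTimerMath {
  // timer.getStartTime's next-period calculation, extracted as-is:
  // floor(current / period) gives the index of the last period boundary;
  // +1 and *period gives the next boundary's timestamp.
  def nextPeriodEnd(currentTime: Long, period: Long): Long =
    (math.floor(currentTime.toDouble / period) + 1).toLong * period
}
```

Note that even when the current time sits exactly on a boundary (say 1000 ms), the result is still the following boundary (1200 ms), so a timer started mid-stream always waits for the next full period rather than firing immediately.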
(3) Start the DStreamGraph by calling graph.start, with a start time one batch interval earlier than startTime. Why one interval earlier? Most likely because the first batch, generated when the timer first fires at startTime, covers the interval that begins at zeroTime = startTime - batchDuration; the graph's zero point therefore has to sit one interval before the first GenerateJobs event.
def start(time: Time) {
  this.synchronized {
    if (zeroTime != null) {
      throw new Exception("DStream graph computation already started")
    }
    zeroTime = time
    startTime = time
    outputStreams.foreach(_.initialize(zeroTime)) // set each output stream's zeroTime to this time
    outputStreams.foreach(_.remember(rememberDuration)) // if rememberDuration was set, propagate it to each output stream
    outputStreams.foreach(_.validateAtStart)
    inputStreams.par.foreach(_.start())
  }
}
(4) Call timer.start with startTime as its argument.
The timer here is:

private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")

It wraps a recurring timer that posts a GenerateJobs message to eventLoop once every batchDuration; the longTime parameter is the timestamp at which the next interval arrives.

/**
 * Start at the given start time.
 */
def start(startTime: Long): Long = synchronized {
  nextTime = startTime
  thread.start()
  logInfo("Started timer for " + name + " at time " + nextTime)
  nextTime
}
The internal thread.start call sets the timer's thread running, and from then on jobs are generated once per interval.
5. Handling of the GenerateJobs/ClearMetadata events
Recall that the JobGeneratorEvent EventLoop covers four events: GenerateJobs, ClearMetadata, DoCheckpoint, and ClearCheckpointData.
GenerateJobs:
/** Generate jobs and perform checkpoint for the given `time`. */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      val streamIdToNumRecords = streamIdToInputInfos.mapValues(_.numRecords)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToNumRecords))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}
(1) allocateBlocksToBatch: first, look up the block metadata that the receivers previously reported to the receiverTracker via AddBlock messages, and record the mapping from this batch time to those blocks.
So how does a given time line up with the blocks produced at every 200 ms interval? The answer: when the batch time arrives, every block that has been received but not yet allocated is assigned to that batch interval.
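That "assign everything unallocated to the arriving batch" rule can be sketched like this (SimpleBlockTracker and BlockInfo are illustrative stand-ins for ReceivedBlockTracker's bookkeeping, not Spark's actual classes):

```scala
import scala.collection.mutable

// Metadata for one received block, as reported via an AddBlock message
case class BlockInfo(blockId: String, numRecords: Long)

// Sketch of the allocation rule: blocks accumulate in an unallocated queue;
// when a batch time arrives, the whole queue is drained into that batch.
class SimpleBlockTracker {
  private val unallocated = mutable.Queue[BlockInfo]()
  private val timeToBlocks = mutable.Map[Long, Seq[BlockInfo]]()

  // Called whenever a receiver reports a stored block (the AddBlock message)
  def addBlock(info: BlockInfo): Unit = synchronized { unallocated += info }

  // Called by generateJobs at each batch time: drain everything received so far
  def allocateBlocksToBatch(time: Long): Unit = synchronized {
    timeToBlocks(time) = unallocated.dequeueAll(_ => true).toList
  }

  def getBlocksOfBatch(time: Long): Seq[BlockInfo] = synchronized {
    timeToBlocks.getOrElse(time, Seq.empty)
  }
}
```

A consequence of the drain-everything rule is that a batch may contain slightly more or fewer than batchDuration's worth of 200 ms blocks; the boundary is "whatever has arrived by now", not a timestamp match on each block.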
(2) generateJobs: one job is generated per outputStream, and in the end every outputStream calls the method below (see the code comments).
Note that the generateJob actually invoked here is the override specific to that outputStream; for print, for example, it outputs a few values:
override def generateJob(time: Time): Option[Job] = {
  // This actually invokes ReceiverInputDStream's compute method, producing an RDD,
  // a BlockRDD to be precise. See the description below.
  parent.getOrCompute(time) match {
    case Some(rdd) =>
      val jobFunc = () => createRDDWithLocalProperties(time) {
        ssc.sparkContext.setCallSite(creationSite)
        // The BlockRDD obtained above and a function to run on each partition are
        // wrapped into jobFunc; inside foreachFunc, runJob is called to submit the
        // task and obtain the output values, which are then printed.
        foreachFunc(rdd, time)
      }
      // Wrap time and jobFunc into a Job and return it, to await scheduling
      Some(new Job(time, jobFunc))
    case None => None
  }
}
A note on ReceiverInputDStream's compute method:
1) it first looks up the block metadata previously mapped to this time value;
2) it collects the blockIds of those blocks; a blockId is essentially the streamId plus a unique value, which guarantees that ids are unique within a stream;
3) it aggregates the metadata of the blocks within this batch time and saves it into inputInfoTracker;
4) it wraps the sparkContext and the blockIds into a BlockRDD and returns it.
At this point the Job has been produced. If job generation succeeded, we take the case Success(jobs) => branch:
jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToNumRecords))
which wraps time, the jobs, and the mapping from streamId to each stream's record count into a JobSet and calls submitJobSet:
def submitJobSet(jobSet: JobSet) {
  if (jobSet.jobs.isEmpty) {
    logInfo("No jobs added for time " + jobSet.time)
  } else {
    listenerBus.post(StreamingListenerBatchSubmitted(jobSet.toBatchInfo))
    jobSets.put(jobSet.time, jobSet)
    jobSet.jobs.foreach(job => jobExecutor.execute(new JobHandler(job)))
    logInfo("Added jobs for time " + jobSet.time)
  }
}
As you can see, the jobSet is stored into the jobSets map, and each job, wrapped in a JobHandler, is executed by a thread from a pool whose size is set by the "spark.streaming.concurrentJobs" parameter, 1 by default.
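A minimal sketch of such a job executor (the object and method names are mine; the pool size stands in for "spark.streaming.concurrentJobs", and with the default of 1 the jobs of a batch run strictly in submission order):

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Sketch of jobExecutor: a fixed thread pool that runs submitted jobs.
// With concurrentJobs = 1 (Spark Streaming's default), jobs within and
// across batches are serialized on the single worker thread.
object JobExecutorSketch {
  def runJobs(concurrentJobs: Int, jobs: Seq[() => Unit]): Unit = {
    val jobExecutor = Executors.newFixedThreadPool(concurrentJobs)
    jobs.foreach { job =>
      jobExecutor.execute(new Runnable { def run(): Unit = job() })
    }
    jobExecutor.shutdown()
    jobExecutor.awaitTermination(10, TimeUnit.SECONDS)
  }
}
```

Raising the pool size lets output operations of different batches overlap, at the cost of losing the guarantee that batches complete in order.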
Next, the logic executed when a JobHandler is picked up by a worker thread (see the code comments):
private class JobHandler(job: Job) extends Runnable with Logging {
  def run() {
    ssc.sc.setLocalProperty(JobScheduler.BATCH_TIME_PROPERTY_KEY, job.time.milliseconds.toString)
    ssc.sc.setLocalProperty(JobScheduler.OUTPUT_OP_ID_PROPERTY_KEY, job.outputOpId.toString)
    try {
      // Mainly sets the processingStartTime of this job's jobset to the current time
      eventLoop.post(JobStarted(job))
      // Disable checks for existing output directories in jobs launched by the streaming
      // scheduler, since we may need to write output to an existing directory during checkpoint
      // recovery; see SPARK-4835 for more details.
      PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
        // run invokes the second argument the Job was constructed with,
        // i.e. the jobFunc shown above
        job.run()
      }
      // If every job in this jobset has now completed, set processingEndTime and
      // post a ClearMetadata message to the event loop (covered below)
      eventLoop.post(JobCompleted(job))
    } finally {
      ssc.sc.setLocalProperty(JobScheduler.BATCH_TIME_PROPERTY_KEY, null)
      ssc.sc.setLocalProperty(JobScheduler.OUTPUT_OP_ID_PROPERTY_KEY, null)
    }
  }
}
ClearMetadata:
Once a jobset completes, the ClearMetadata message is processed:
1. Filter out the RDDs generated before time; if rememberDuration is set, filter out the RDDs at or before (time - rememberDuration).
2. Call unpersist on the filtered RDDs.
3. Delete the corresponding blocks from the blockManager.
4. Walk the dependency chain, starting from the outputStream, and clean up link by link.
5. Delete the remaining in-memory bookkeeping.
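Steps 1 and 2 can be sketched against a plain map standing in for a DStream's generatedRDDs (ClearMetadataSketch and clearOldEntries are illustrative names; the real cleanup also unpersists each evicted RDD and removes its blocks from the BlockManager):

```scala
import scala.collection.mutable

// Sketch of the age-based eviction in ClearMetadata: drop cached
// (batchTime -> RDD) entries at or before (time - rememberDuration)
// and return the evicted batch times.
object ClearMetadataSketch {
  def clearOldEntries[V](generated: mutable.Map[Long, V],
                         time: Long,
                         rememberDuration: Long): Seq[Long] = {
    val oldKeys = generated.keys.filter(_ <= time - rememberDuration).toSeq
    oldKeys.foreach(generated.remove) // Spark would also call rdd.unpersist() here
    oldKeys
  }
}
```

This is why setting a larger rememberDuration (e.g. for windowed operations) keeps more batches' RDDs cached: the eviction threshold simply moves further into the past.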
This concludes our analysis of the most important part of Spark Streaming: scheduling and execution!