These notes record one execution path through the Spark source code — brief, so I can retrace the reasoning later, and perhaps useful as a roadmap for anyone else reading the source. Following along with the actual source while reading will still be the most rewarding.
Entry point
Every task submitted via spark-submit enters through SparkSubmit; start from the main method in SparkSubmit.scala.
main
==>
submit(appArgs)
==>
prepareSubmitEnvironment
Prepares the submission environment: determines the language (Java, Python, or R), whether the master is YARN or Mesos, and whether the deploy mode is client or cluster, then resolves the corresponding mainClass. For yarn-cluster the main class is org.apache.spark.deploy.yarn.Client; for yarn-client it is the class the user passed via --class.
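The mode-to-mainClass decision above can be sketched as a tiny function. This is a hypothetical simplification — the real prepareSubmitEnvironment also handles Python/R, Mesos, standalone, and REST submission — but it captures the yarn branch described here:

```java
// Minimal sketch of how the main class is chosen for YARN submissions
// (illustrative only; the real prepareSubmitEnvironment covers many more cases).
public class MainClassResolver {
    static String resolve(String master, String deployMode, String userClass) {
        if ("yarn".equals(master) && "cluster".equals(deployMode)) {
            // yarn-cluster: SparkSubmit launches the YARN Client,
            // which in turn submits the ApplicationMaster to the RM.
            return "org.apache.spark.deploy.yarn.Client";
        }
        // yarn-client: the user's --class runs directly in this JVM.
        return userClass;
    }

    public static void main(String[] args) {
        System.out.println(resolve("yarn", "cluster", "com.example.MyApp"));
        System.out.println(resolve("yarn", "client", "com.example.MyApp"));
    }
}
```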
doRunMain -->
runMain // resolves the main class and its main method, then starts execution via mainMethod.invoke (reflection)
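The reflective launch in runMain looks roughly like the sketch below (simplified: the real code also sets up a child classloader and unwraps invocation exceptions; FakeClient stands in for a resolved class such as org.apache.spark.deploy.yarn.Client):

```java
import java.lang.reflect.Method;

// Sketch of runMain's reflective invocation of the resolved main class.
public class ReflectiveLauncher {
    public static String lastMessage;

    // Stand-in for the resolved main class (hypothetical).
    public static class FakeClient {
        public static void main(String[] args) { lastMessage = "started:" + args[0]; }
    }

    public static void main(String[] args) throws Exception {
        // SparkSubmit resolves the class by name, as done here via Class.forName.
        Class<?> mainClass = Class.forName(FakeClient.class.getName());
        Method mainMethod = mainClass.getMethod("main", String[].class);
        // Cast to Object so the String[] is passed as one argument,
        // not expanded as varargs.
        mainMethod.invoke(null, (Object) new String[] { "app-arg" });
        System.out.println(lastMessage);
    }
}
```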
yarn-cluster mode
Execution starts from the main method of org.apache.spark.deploy.yarn.Client.
main ==>
run ===>
submitApplication
val containerContext = createContainerLaunchContext(newAppResponse) // build the command that launches the ApplicationMaster
val appContext = createApplicationSubmissionContext(newApp, containerContext)
// Finally, submit and monitor the application
logInfo(s"Submitting application $appId to ResourceManager")
yarnClient.submitApplication(appContext) // submit the application through the YARN client
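The AM launch command that createContainerLaunchContext assembles has roughly the shape below. This is a hypothetical simplification — the real code also adds JVM options, the classpath, log redirection, and localized resources — but it shows the key switch: cluster mode launches ApplicationMaster (which will host the driver), client mode launches the lighter ExecutorLauncher:

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the ApplicationMaster launch command (illustrative only).
public class AmCommandBuilder {
    static String build(boolean clusterMode, String userClass, int amMemoryMb) {
        List<String> cmd = new ArrayList<>();
        cmd.add("{{JAVA_HOME}}/bin/java");      // expanded by YARN on the node
        cmd.add("-Xmx" + amMemoryMb + "m");
        cmd.add(clusterMode ? "org.apache.spark.deploy.yarn.ApplicationMaster"
                            : "org.apache.spark.deploy.yarn.ExecutorLauncher");
        if (clusterMode) {
            // in cluster mode the AM needs the user class to start the driver
            cmd.add("--class");
            cmd.add(userClass);
        }
        return String.join(" ", cmd);
    }

    public static void main(String[] args) {
        System.out.println(build(true, "com.example.MyApp", 1024));
        System.out.println(build(false, null, 512));
    }
}
```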
At this point the ApplicationMaster starts.
ApplicationMaster.scala main ===>
run () ===>
if (isClusterMode) { // distinguish cluster vs client mode
runDriver(securityMgr) // start the Driver
} else {
runExecutorLauncher(securityMgr)
}
==> runDriver
===> userClassThread = startUserApplication() // start the user-specified class in a new thread
The spark-submit process then disconnects; submission is complete.
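startUserApplication can be sketched as below: the AM runs the user's main method in a dedicated thread (which Spark names "Driver") so the AM's own loop keeps running alongside it. Simplified: no classloader setup or exception reporting back to the AM:

```java
import java.lang.reflect.Method;

// Sketch of startUserApplication: run the user class's main in its own thread.
public class UserAppRunner {
    public static volatile String result;

    // Hypothetical stand-in for the user-specified class.
    public static class UserApp {
        public static void main(String[] args) { result = "user-app-ran"; }
    }

    static Thread startUserApplication(Class<?> userClass, String[] args) throws Exception {
        Method mainMethod = userClass.getMethod("main", String[].class);
        Thread t = new Thread(() -> {
            try {
                mainMethod.invoke(null, (Object) args);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, "Driver"); // Spark names this thread "Driver"
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        Thread t = startUserApplication(UserApp.class, new String[0]);
        t.join(); // the AM later joins this thread to await user-code completion
        System.out.println(result);
    }
}
```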
yarn-client mode
The entry point invokes the user's class directly.
During SparkContext initialization, the TaskScheduler implementation is created:
val (sched, ts) = SparkContext.createTaskScheduler(this, master, deployMode)
In yarn-client mode the backend is YarnClientSchedulerBackend,
and the taskScheduler is YarnScheduler.
taskScheduler.start invokes backend.start.
In YarnClientSchedulerBackend.start, a Client object is created and its submitApplication method is called:
client = new Client(args, conf)
bindToYarn(client.submitApplication(), None)
===> submitApplication
===> yarnClient.submitApplication
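The delegation chain above (taskScheduler.start → backend.start → new Client → submitApplication) can be sketched as follows; every class here is a stand-in for the Spark class of the same name, not the real API:

```java
// Sketch of the start-up delegation chain in yarn-client mode (illustrative).
public class StartChain {
    static StringBuilder trace = new StringBuilder();

    static class Client {
        String submitApplication() {
            trace.append("yarnClient.submitApplication;");
            return "application_0001"; // hypothetical application id
        }
    }

    static class YarnClientSchedulerBackend {
        void start() {
            Client client = new Client();            // client = new Client(args, conf)
            String appId = client.submitApplication();
            trace.append("bindToYarn(").append(appId).append(");");
        }
    }

    static class YarnScheduler {
        YarnClientSchedulerBackend backend = new YarnClientSchedulerBackend();
        void start() { backend.start(); }            // scheduler delegates to backend
    }

    public static void main(String[] args) {
        new YarnScheduler().start();
        System.out.println(trace);
    }
}
```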
In the ApplicationMaster's main class:
ApplicationMaster.scala main ===>
run () ===>
if (isClusterMode) { // distinguish cluster vs client mode
runDriver(securityMgr) // start the Driver
} else {
runExecutorLauncher(securityMgr)
}
Client mode takes the runExecutorLauncher branch.
The ApplicationMaster establishes an RPC connection with the client side and waits for the driver, started by the client, to come up.
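The "wait for the driver" step is essentially a retry loop: real Spark repeatedly attempts a connection to spark.driver.host:spark.driver.port until the driver answers. In the sketch below a shared reference stands in for that reachability check (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the waitForSparkDriver idea: poll until the driver is reachable.
public class DriverWaiter {
    static String waitForSparkDriver(AtomicReference<String> driverAddress,
                                     long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            String addr = driverAddress.get();
            if (addr != null) return addr;     // driver is up; return its address
            Thread.sleep(10);                  // retry interval
        }
        throw new IllegalStateException("driver did not start in time");
    }

    public static void main(String[] args) throws Exception {
        AtomicReference<String> addr = new AtomicReference<>();
        // Simulate the driver coming up on the client host a moment later.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            addr.set("localhost:43215");
        }).start();
        System.out.println(waitForSparkDriver(addr, 5000));
    }
}
```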
Comparison
//cluster
rpcEnv = sc.env.rpcEnv
val driverRef = runAMEndpoint(
sc.getConf.get("spark.driver.host"),
sc.getConf.get("spark.driver.port"),
isClusterMode = true) // driver host/port: the driver runs alongside the AM
registerAM(sc.getConf, rpcEnv, driverRef, sc.ui.map(_.appUIAddress).getOrElse(""),
securityMgr)
//client
rpcEnv = RpcEnv.create("sparkYarnAM", Utils.localHostName, port, sparkConf, securityMgr,
clientMode = true)
val driverRef = waitForSparkDriver() // wait for the driver running on the client (local) host
addAmIpFilter()
registerAM(sparkConf, rpcEnv, driverRef, sparkConf.get("spark.driver.appUIAddress", ""),
securityMgr) // register the AM
The difference is where the driver runs: in cluster mode it lives alongside the ApplicationMaster; in client mode it runs on the local client host.
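The two branches compared above boil down to a single decision about the driver's address (a tiny illustration; names and conf keys mirror the snippets above):

```java
import java.util.Map;

// Illustration of where the AM looks for the driver in each mode.
public class DriverLocation {
    static String driverHost(boolean clusterMode, Map<String, String> conf) {
        if (clusterMode) {
            // cluster mode: the driver runs inside the AM container;
            // its address comes from the SparkContext's conf.
            return conf.get("spark.driver.host");
        }
        // client mode: the driver runs in the spark-submit JVM on the client host.
        return "localhost";
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of("spark.driver.host", "am-node-17");
        System.out.println(driverHost(true, conf));
        System.out.println(driverHost(false, conf));
    }
}
```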
main class
SparkPi.main
===>
rdd.reduce
===>
sc.runJob
===>
dagScheduler.runJob
===>
submitJob
===>
handleJobSubmitted
===>
submitStage
===>
submitMissingTasks // reached via submitStage, which recurses over missing parent stages in a loop
===>
taskScheduler.submitTasks(taskSet)
===>
backend.reviveOffers() // sends a ReviveOffers message to the DriverEndpoint of CoarseGrainedSchedulerBackend
===>
On the backend (DriverEndpoint) side:
case ReviveOffers =>
makeOffers()
===>
launchTasks
The tasks are serialized and sent to each Executor for execution; task submission is now complete.
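The "for-loop recursion" noted at the submitMissingTasks step can be sketched as below: before a stage's tasks are submitted, any missing parent stages are submitted first, so stages run in dependency order. Simplified: real Spark parks a stage in waitingStages and resubmits it when parent results arrive, rather than retrying immediately; the classes here are illustrative stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of submitStage's recursion over missing parent stages.
public class StageSubmitter {
    static List<String> submitted = new ArrayList<>();

    static class Stage {
        final String name;
        final List<Stage> parents;
        Stage(String name, List<Stage> parents) { this.name = name; this.parents = parents; }
    }

    static void submitStage(Stage stage) {
        List<Stage> missing = new ArrayList<>();
        for (Stage p : stage.parents) {
            if (!submitted.contains(p.name)) missing.add(p);
        }
        if (missing.isEmpty()) {
            submitMissingTasks(stage);               // all parents done: run this stage
        } else {
            for (Stage p : missing) submitStage(p);  // recurse into parents first
            submitStage(stage);                      // then retry this stage (simplified)
        }
    }

    static void submitMissingTasks(Stage stage) {
        submitted.add(stage.name); // stands in for taskScheduler.submitTasks(taskSet)
    }

    public static void main(String[] args) {
        Stage shuffle1 = new Stage("shuffle1", new ArrayList<>());
        Stage shuffle2 = new Stage("shuffle2", new ArrayList<>());
        Stage result = new Stage("result", List.of(shuffle1, shuffle2));
        submitStage(result);
        System.out.println(String.join(",", submitted)); // parents before the final stage
    }
}
```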