1. First, start-all.sh invokes start-master.sh (see spark-core_05: analysis of the $SPARK_HOME/sbin/start-all.sh and start-master.sh scripts). start-master.sh then invokes spark-daemon.sh as follows:
spark-daemon.sh start org.apache.spark.deploy.master.Master 1 luyl152 --port 7077 --webui-port 8080
2. spark-daemon.sh in turn invokes spark-class as follows (see spark-core_06: analysis of the $SPARK_HOME/sbin/spark-daemon.sh script):
$SPARK_HOME/bin/spark-class org.apache.spark.deploy.master.Master luyl152 --port 7077 --webui-port 8080
3. spark-class then launches the Master with a command like the following (see spark-core_02: analysis of the spark-submit and spark-class scripts):
java -cp spark_home/lib/spark-assembly-1.6.0-hadoop2.6.0.jar org.apache.spark.launcher.Main org.apache.spark.deploy.master.Master luyl152 --port 7077 --webui-port 8080
4. For how launcher.Main performs the actual invocation, see spark-core_03: analysis of the org.apache.spark.launcher.Main source.
I. In Master.main, we can see that SparkConf, MasterArguments, and the RpcEnv are initialized.
/**
 * To debug the Master, add the following to spark-env.sh on the master node:
 *   export SPARK_MASTER_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"
 * To debug a Worker, add the following to spark-env.sh on that worker's node:
 *   export SPARK_WORKER_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"
 */
private[deploy] object Master extends Logging {
  val SYSTEM_NAME = "sparkMaster"
  val ENDPOINT_NAME = "Master"

  // The arguments passed in by spark-class are: --port 7077 --webui-port 8080
  def main(argStrings: Array[String]) {
    SignalLogger.register(log)
    val conf = new SparkConf
    // What the RpcEnv needs -- host: the local hostname, port: 7077, webUiPort: 8080.
    // MasterArguments parses the arguments and the relevant environment variables.
    val args = new MasterArguments(argStrings, conf)
    val (rpcEnv, _, _) = startRpcEnvAndEndpoint(args.host, args.port, args.webUiPort, conf)
    rpcEnv.awaitTermination()
  }
1. When new SparkConf is instantiated, it picks up every JVM system property whose key starts with spark.*:
class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging {

  import SparkConf._

  /** Create a SparkConf that loads defaults from system properties and the classpath */
  def this() = this(true)

  private val settings = new ConcurrentHashMap[String, String]()

  if (loadDefaults) {
    // Load any spark.* system properties
    /** Every system property starting with spark.* is loaded here, so HA settings configured
     * in spark-env.sh end up in the SparkConf, e.g.:
     *   export SPARK_DAEMON_JAVA_OPTS="-Dsun.io.serialization.extendedDebugInfo=true
     *     -Dspark.deploy.recoveryMode=ZOOKEEPER
     *     -Dspark.deploy.zookeeper.url=luyl153:2181,luyl154:2181,luyl155:2181
     *     -Dspark.deploy.zookeeper.dir=/spark"
     * In HA mode, the Master's RECOVERY_MODE member therefore resolves to ZOOKEEPER.
     */
    for ((key, value) <- Utils.getSystemProperties if key.startsWith("spark.")) {
      set(key, value)
    }
  } // ...
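To make this loading behavior concrete, here is a minimal runnable sketch, assuming a hypothetical MiniConf class (not Spark's actual SparkConf), of the same filter-and-copy loop over JVM system properties:

import java.util.concurrent.ConcurrentHashMap
import scala.collection.JavaConverters._

class MiniConf(loadDefaults: Boolean) {
  private val settings = new ConcurrentHashMap[String, String]()

  if (loadDefaults) {
    // Copy every JVM system property whose key starts with "spark.", e.g. the
    // -Dspark.deploy.recoveryMode=ZOOKEEPER flag injected via SPARK_DAEMON_JAVA_OPTS
    for ((key, value) <- System.getProperties.asScala if key.startsWith("spark.")) {
      settings.put(key, value)
    }
  }

  def get(key: String, default: String): String =
    Option(settings.get(key)).getOrElse(default)
}

object MiniConfDemo extends App {
  // Simulate what SPARK_DAEMON_JAVA_OPTS would have set on the JVM command line
  System.setProperty("spark.deploy.recoveryMode", "ZOOKEEPER")
  val conf = new MiniConf(loadDefaults = true)
  println(conf.get("spark.deploy.recoveryMode", "NONE")) // prints ZOOKEEPER
}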
2. Now let's analyze MasterArguments(argStrings, conf). This class parses the passed-in arguments, together with the default configuration from the environment, into its own fields so that the Master can use them conveniently.
/**
 * Command-line parser for the master.
 * The args here are: --port 7077 --webui-port 8080
 */
private[master] class MasterArguments(args: Array[String], conf: SparkConf) {
  // the current hostname
  var host = Utils.localHostName()
  var port = 7077
  var webUiPort = 8080
  var propertiesFile: String = null

  // Check for settings in environment variables
  // Read the values exported by the startup scripts; in the spark-env.sh script the
  // corresponding variable is SPARK_MASTER_IP
  if (System.getenv("SPARK_MASTER_HOST") != null) {
    host = System.getenv("SPARK_MASTER_HOST")
  }
  if (System.getenv("SPARK_MASTER_PORT") != null) {
    port = System.getenv("SPARK_MASTER_PORT").toInt
  }
  if (System.getenv("SPARK_MASTER_WEBUI_PORT") != null) {
    webUiPort = System.getenv("SPARK_MASTER_WEBUI_PORT").toInt
  }

  // The arguments passed in by spark-class are: --port 7077 --webui-port 8080
  parse(args.toList)

  // This mutates the SparkConf, so all accesses to it must be made after this line.
  // spark-class does not pass --properties-file when starting the Master, so propertiesFile
  // is null. loadDefaultSparkProperties loads the default Spark properties from the given
  // file path; if no file is given, it uses the default conf/spark-defaults.conf (or the
  // file specified via --properties-file) and returns the path of the properties file used.
  // Sample contents of spark-defaults.conf:
  //   # spark.master                     spark://master:7077
  //   # spark.eventLog.enabled           true
  //   # spark.eventLog.dir               hdfs://namenode:8021/directory
  //   # spark.serializer                 org.apache.spark.serializer.KryoSerializer
  //   # spark.driver.memory              5g
  //   # spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  propertiesFile = Utils.loadDefaultSparkProperties(conf, propertiesFile)

  if (conf.contains("spark.master.ui.port")) {
    webUiPort = conf.get("spark.master.ui.port").toInt
  }

  // The arguments passed in by spark-class are: --port 7077 --webui-port 8080
  private def parse(args: List[String]): Unit = args match {
    case ("--ip" | "-i") :: value :: tail =>
      Utils.checkHost(value, "ip no longer supported, please use hostname " + value)
      host = value
      parse(tail)

    case ("--host" | "-h") :: value :: tail =>
      Utils.checkHost(value, "Please use hostname " + value)
      host = value
      parse(tail)

    case ("--port" | "-p") :: IntParam(value) :: tail =>
      // IntParam("7077") extracts the value as Some(7077)
      port = value
      parse(tail)

    case "--webui-port" :: IntParam(value) :: tail =>
      // matched on the second recursive pass
      webUiPort = value
      parse(tail)

    case ("--properties-file") :: value :: tail =>
      propertiesFile = value
      parse(tail)

    case ("--help") :: tail =>
      printUsageAndExit(0)

    case Nil => {}

    case _ =>
      printUsageAndExit(1)
  }
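The recursive List pattern matching in parse is worth a closer look. Here is a standalone sketch of the same technique (ArgParseDemo and its IntParam extractor are illustrative stand-ins, not Spark's classes): each case consumes one flag plus its value and recurses on the tail.

object ArgParseDemo {
  // Extractor mirroring the idea of Spark's IntParam: "7077" matches as Some(7077)
  object IntParam {
    def unapply(s: String): Option[Int] =
      try Some(s.toInt) catch { case _: NumberFormatException => None }
  }

  var port = 7077
  var webUiPort = 8080

  def parse(args: List[String]): Unit = args match {
    case ("--port" | "-p") :: IntParam(value) :: tail =>
      port = value
      parse(tail) // recurse on the remaining arguments
    case "--webui-port" :: IntParam(value) :: tail =>
      webUiPort = value
      parse(tail)
    case Nil => // all arguments consumed
    case other => sys.error(s"Unrecognized arguments: $other")
  }

  def main(args: Array[String]): Unit = {
    parse(List("--port", "7077", "--webui-port", "8080"))
    println(s"port=$port webUiPort=$webUiPort") // port=7077 webUiPort=8080
  }
}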
===> Out of curiosity, let's look at what Utils.loadDefaultSparkProperties(conf, propertiesFile) actually does:
def loadDefaultSparkProperties(conf: SparkConf, filePath: String = null): String = {
  // filePath is null here, so getDefaultPropertiesFile() is used: it resolves $SPARK_CONF_DIR
  // from the environment (i.e. spark_home/conf) and takes spark-defaults.conf as the Spark
  // properties file.
  val path = Option(filePath).getOrElse(getDefaultPropertiesFile())
  Option(path).foreach { confFile =>
    // Only keys starting with "spark." are taken from spark-defaults.conf. (Tip: enabling
    // spark.serializer there can improve serialization efficiency.)
    getPropertiesFromFile(confFile).filter { case (k, v) =>
      k.startsWith("spark.")
    }.foreach { case (k, v) =>
      conf.setIfMissing(k, v)
      // The corresponding system properties are updated here as well
      sys.props.getOrElseUpdate(k, v)
    }
  }
  path
}
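Note the precedence this implies: a key already present in the conf (for example, one set from a -Dspark.* system property) is not overwritten by spark-defaults.conf, because setIfMissing only fills gaps. A small sketch of that rule, using a plain mutable.Map in place of SparkConf:

import scala.collection.mutable

object SetIfMissingDemo extends App {
  // Already set, e.g. via a -Dspark.master system property
  val conf = mutable.Map("spark.master" -> "spark://luyl152:7077")
  val defaults = Map(
    "spark.master"     -> "spark://other:7077",                        // ignored: key already present
    "spark.serializer" -> "org.apache.spark.serializer.KryoSerializer") // filled in

  for ((k, v) <- defaults if k.startsWith("spark."))
    conf.getOrElseUpdate(k, v) // same effect as conf.setIfMissing(k, v)

  println(conf("spark.master"))     // spark://luyl152:7077
  println(conf("spark.serializer")) // org.apache.spark.serializer.KryoSerializer
}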
===> So it is simply used to load a properties file:
/** Load properties present in the given file. */
def getPropertiesFromFile(filename: String): Map[String, String] = {
  val file = new File(filename)
  require(file.exists(), s"Properties file $file does not exist")
  require(file.isFile(), s"Properties file $file is not a normal file")

  val inReader = new InputStreamReader(new FileInputStream(file), "UTF-8")
  try {
    val properties = new Properties()
    properties.load(inReader)
    // The properties file does not have to use the key=value format: a key followed by one or
    // more spaces, a tab, or a colon also forms a key/value pair. spark-defaults.conf separates
    // keys and values with multiple spaces.
    // Converting Java collections to Scala requires the implicit conversions imported by
    // import scala.collection.JavaConverters._
    properties.stringPropertyNames().asScala.map(
      k => (k, properties.getProperty(k).trim)).toMap
  } catch {
    case e: IOException =>
      throw new SparkException(s"Failed when loading Spark properties from $filename", e)
  } finally {
    inReader.close()
  }
}
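The comment about separators reflects standard java.util.Properties behavior: '=', ':', or whitespace may all separate key and value. A quick demo with content shaped like spark-defaults.conf:

import java.io.StringReader
import java.util.Properties
import scala.collection.JavaConverters._

object PropertiesFormatDemo extends App {
  // Three separator styles, all parsed identically by Properties.load
  val text =
    """spark.master                  spark://luyl152:7077
      |spark.eventLog.enabled=true
      |spark.serializer: org.apache.spark.serializer.KryoSerializer
      |""".stripMargin

  val props = new Properties()
  props.load(new StringReader(text))

  props.stringPropertyNames().asScala.toSeq.sorted.foreach { k =>
    println(s"$k -> ${props.getProperty(k).trim}")
  }
}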
II. With SparkConf and MasterArguments loaded, it is time to start the Master's RpcEnv (the RpcEnv and RpcEndpoint source code is analyzed in later chapters).
/**
 * Start the Master and return a three tuple of:
 * (1) The Master RpcEnv
 * (2) The web UI bound port
 * (3) The REST server bound port, if any
 *
 * It registers the Master with the RpcEnv, sends the case object BoundPortsRequest to the
 * masterEndpoint, and gets back the (RpcEnv, Int, Option[Int]) tuple replied by receiveAndReply.
 */
def startRpcEnvAndEndpoint(
    host: String,    // luyl152, i.e. the master node
    port: Int,       // 7077
    webUiPort: Int,  // 8080
    conf: SparkConf): (RpcEnv, Int, Option[Int]) = {
  // SecurityManager configures permissions and accounts; it is also what decides whether
  // Jetty enables SSL (i.e. https):
  //   val connector = securityManager.fileServerSSLOptions.createJettySslContextFactory()
  //     .map(new SslSocketConnector(_)).getOrElse(new SocketConnector)
  val securityMgr = new SecurityManager(conf)
  // SYSTEM_NAME = sparkMaster, luyl152, 7077, conf: the NettyRpcEnvFactory creates an RpcEnv
  // container whose id is sparkMaster
  val rpcEnv = RpcEnv.create(SYSTEM_NAME, host, port, conf, securityMgr)
  // Register the Master RpcEndpoint with the RpcEnv container under the id "Master"
  val masterEndpoint = rpcEnv.setupEndpoint(ENDPOINT_NAME,
    new Master(rpcEnv, rpcEnv.address, webUiPort, securityMgr, conf))
  // Send BoundPortsRequest to the Master; since a reply is required, it is handled in
  // receiveAndReply. BoundPortsResponse members: rpcEndpointPort is 7077, the web UI port
  // is 8080, and restPort is 6066.
  val portsResponse = masterEndpoint.askWithRetry[BoundPortsResponse](BoundPortsRequest)
  (rpcEnv, portsResponse.webUIPort, portsResponse.restPort)
}

Next, let's analyze how this Master RpcEndpoint initializes the Master.
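Before diving into that, here is a minimal self-contained model of the ask/reply round trip that askWithRetry performs above. Everything in it (the local BoundPortsRequest/BoundPortsResponse definitions, the toy ask and receiveAndReply) is an illustrative stand-in, not Spark's RpcEnv implementation:

import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

case object BoundPortsRequest
case class BoundPortsResponse(rpcEndpointPort: Int, webUIPort: Int, restPort: Option[Int])

object AskReplyDemo extends App {
  // Toy endpoint playing the role of Master.receiveAndReply: it receives a
  // message and fulfils the caller's reply Promise
  def receiveAndReply(msg: Any, reply: Promise[Any]): Unit = msg match {
    case BoundPortsRequest =>
      reply.success(BoundPortsResponse(7077, 8080, Some(6066)))
    case other =>
      reply.failure(new IllegalArgumentException(s"Unexpected message: $other"))
  }

  // Toy ask[T], standing in for masterEndpoint.askWithRetry[BoundPortsResponse]:
  // send the request, then block until the typed reply arrives
  def ask[T](msg: Any): T = {
    val p = Promise[Any]()
    receiveAndReply(msg, p)
    Await.result(p.future, 5.seconds).asInstanceOf[T]
  }

  val ports = ask[BoundPortsResponse](BoundPortsRequest)
  println(s"webUIPort=${ports.webUIPort}, restPort=${ports.restPort}") // 8080, Some(6066)
}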