spark-core_10: org.apache.spark.deploy.master.Master source analysis 2 -- how the Master RpcEndpoint initializes the Master

Continuing from the previous article.

/**
  * The RpcEndpoint lifecycle is: onStart -> receive (receiveAndReply)* -> onStop.
  * This Master RpcEndpoint is thread-safe.
  */
private[deploy] class Master(
    override val rpcEnv: RpcEnv,
    address: RpcAddress,
    webUiPort: Int,
    val securityMgr: SecurityManager,
    val conf: SparkConf)
  extends ThreadSafeRpcEndpoint with Logging with LeaderElectable {
  // A daemon scheduled thread pool with a single thread; the string is the thread's name. Despite the
  // "forward-message" name, the code below uses it to check whether workers have died and for leader-election work.
  private val forwardMessageThread =
    ThreadUtils.newDaemonSingleThreadScheduledExecutor("master-forward-message-thread")
  // This pool also has a single thread, but unlike the one above it is not a scheduled executor; it rebuilds the
  // UI of finished applications. (Both pools are thin wrappers around a Guava ThreadFactory; see the sketch below.)
  private val rebuildUIThread =
    ThreadUtils.newDaemonSingleThreadExecutor("master-rebuild-ui-thread")
  private val rebuildUIContext = ExecutionContext.fromExecutor(rebuildUIThread)
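
The comment above mentions that both pools are built on a Guava ThreadFactory. As a minimal standalone sketch (not the ThreadUtils source, and assuming Guava is on the classpath), a named daemon single-thread scheduled executor can be created like this:

import java.util.concurrent.{Executors, ScheduledExecutorService}
import com.google.common.util.concurrent.ThreadFactoryBuilder

// Sketch only: a daemon, single-thread scheduled executor with a fixed thread name,
// similar in spirit to ThreadUtils.newDaemonSingleThreadScheduledExecutor.
def daemonSingleThreadScheduledExecutor(threadName: String): ScheduledExecutorService = {
  val factory = new ThreadFactoryBuilder().setDaemon(true).setNameFormat(threadName).build()
  Executors.newSingleThreadScheduledExecutor(factory)
}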

  // Returns an appropriate (subclass of) Hadoop Configuration; creating it may initialize some Hadoop subsystems.
  // The implementation is simple: every spark.hadoop.foo=bar key in the SparkConf is copied into hadoopConf as
  // foo=bar, with the spark.hadoop. prefix stripped (see the sketch below).
  private val hadoopConf = SparkHadoopUtil.get.newConfiguration(conf)
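
As a rough illustration of that prefix stripping (a simplified sketch, not the SparkHadoopUtil source):

import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf

// Sketch: copy every spark.hadoop.foo=bar entry from the SparkConf into a Hadoop
// Configuration as foo=bar, dropping the "spark.hadoop." prefix.
def hadoopConfFrom(conf: SparkConf): Configuration = {
  val hadoopConf = new Configuration()
  conf.getAll.foreach { case (key, value) =>
    if (key.startsWith("spark.hadoop.")) {
      hadoopConf.set(key.substring("spark.hadoop.".length), value)
    }
  }
  hadoopConf
}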

  private def createDateFormat = new SimpleDateFormat("yyyyMMddHHmmss") // For application IDs

  private val WORKER_TIMEOUT_MS = conf.getLong("spark.worker.timeout", 60) * 1000
  private val RETAINED_APPLICATIONS = conf.getInt("spark.deploy.retainedApplications", 200)
  private val RETAINED_DRIVERS = conf.getInt("spark.deploy.retainedDrivers", 200)
  private val REAPER_ITERATIONS = conf.getInt("spark.dead.worker.persistence", 15)
  /**
    * export SPARK_DAEMON_JAVA_OPTS="-Dsun.io.serialization.extendedDebugInfo=true
    * -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=luyl153:2181,luyl154:2181,luyl155:2181
    * -Dspark.deploy.zookeeper.dir=/spark"
    * These SPARK_DAEMON_JAVA_OPTS are read from the environment when new SparkConf() runs in main();
    * in HA mode RECOVERY_MODE therefore resolves to ZOOKEEPER.
    */
  private val RECOVERY_MODE = conf.get("spark.deploy.recoveryMode", "NONE")

  val workers = new HashSet[WorkerInfo]
  val idToApp = new HashMap[String, ApplicationInfo]
  val waitingApps = new ArrayBuffer[ApplicationInfo]
  val apps = new HashSet[ApplicationInfo]

  private val idToWorker = new HashMap[String, WorkerInfo]
  private val addressToWorker = new HashMap[RpcAddress, WorkerInfo]

  private val endpointToApp = new HashMap[RpcEndpointRef, ApplicationInfo]
  private val addressToApp = new HashMap[RpcAddress, ApplicationInfo]
  private val completedApps = new ArrayBuffer[ApplicationInfo]
  private var nextAppNumber = 0
  // Using ConcurrentHashMap so that master-rebuild-ui-thread can add a UI after asyncRebuildUI
  private val appIdToUI = new ConcurrentHashMap[String, SparkUI]

  private val drivers = new HashSet[DriverInfo] // DriverInfo is used for drivers submitted via StandaloneRestServer, i.e. cluster mode
  private val completedDrivers = new ArrayBuffer[DriverInfo]
  // Drivers currently spooled for scheduling
  private val waitingDrivers = new ArrayBuffer[DriverInfo]
  private var nextDriverNumber = 0

  Utils.checkHost(address.host, "Expected hostname")

/** The instance name can be master, worker, executor, driver or applications. The MetricsSystem is created here
  * and the default metrics properties are added, so MetricsConfig.propertyCategories ends up as:
  * (applications,{sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/applications/json})
  * (master,{sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/master/json})
  * (*,{sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/json})
  */


 
  private val masterMetricsSystem = MetricsSystem.createMetricsSystem("master", conf, securityMgr)
  private val applicationMetricsSystem = MetricsSystem.createMetricsSystem("applications", conf, securityMgr)

1. What does the MetricsSystem metrics class actually do in the code?

==> First, it is instantiated:

def createMetricsSystem(
    instance: String, conf: SparkConf, securityMgr: SecurityManager): MetricsSystem = {
  new MetricsSystem(instance, conf, securityMgr)
}

2. Look at the MetricsSystem class comment (translated below): it collects the state of Spark's built-in sources such as MasterSource and WorkerSource (plus the worker and executor ones) and sinks it to the configured servlet.

  * An instance specifies the role that uses the metrics system. In Spark, roles such as master, worker,
  * executor and client driver are monitored by the MetricsSystem; the instances master, worker, executor,
  * driver and applications have already been implemented.
  *
  * A source specifies where the metrics data is collected from. There are two kinds of source:
  * 1. Spark internal sources, such as MasterSource and WorkerSource, which collect a Spark component's state;
  *    their instances are added to the MetricsSystem after it is created.
  * 2. Common sources, such as JvmSource, which collect low-level state and can be loaded through
  *    configuration and reflection.
  *
  * A sink specifies where the metrics data is output to. Several sinks can coexist, and metrics are flushed
  * to all of them. The metrics configuration format is:
  * [instance].[sink|source].[name].[options] = xxxx
  * instance can be "master", "worker", "executor", "driver" or "applications", meaning only the specified
  * instance has the property; the wildcard "*" can be used instead, meaning all instances have the property.
  * The second field can only be sink or source.
  * name: the name of the sink or source; it can be customized.
  * options: the properties of the source or sink.
  */
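
A hypothetical example of this format (the console-sink and JVM-source keys here are illustrative assumptions, not taken from this walkthrough): the same keys can also be supplied through SparkConf using the spark.metrics.conf. prefix, which MetricsConfig.initialize() strips off below.

import org.apache.spark.SparkConf

// Hypothetical metrics keys in the [instance].[sink|source].[name].[options] format,
// passed through SparkConf with the "spark.metrics.conf." prefix:
val conf = new SparkConf()
  // every instance ("*") gets a console sink that reports every 10 seconds
  .set("spark.metrics.conf.*.sink.console.class", "org.apache.spark.metrics.sink.ConsoleSink")
  .set("spark.metrics.conf.*.sink.console.period", "10")
  // only the master instance additionally registers the JVM source
  .set("spark.metrics.conf.master.source.jvm.class", "org.apache.spark.metrics.source.JvmSource")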

private[spark] class MetricsSystem private (
    val instance: String,
    conf: SparkConf,
    securityMgr: SecurityManager)
  extends Logging {

  private[this] val metricsConfig = new MetricsConfig(conf)

  private val sinks = new mutable.ArrayBuffer[Sink]
  private val sources = new mutable.ArrayBuffer[Source]
  private val registry = new MetricRegistry()

  private var running: Boolean = false

  // Treat MetricsServlet as a special sink as it should be exposed to add handlers to web ui
  // (i.e. MetricsServlet is special because its handlers are attached to the web UI)
  private var metricsServlet: Option[MetricsServlet] = None

  /**
   * Get any UI handlers used by this metrics system; can only be called after start().
   */
  def getServletHandlers: Array[ServletContextHandler] = {
    require(running, "Can only call getServletHandlers on a running MetricsSystem")
    metricsServlet.map(_.getHandlers(conf)).getOrElse(Array())
  }
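
For context on why MetricsServlet is treated as a special sink: later, in Master.onStart, these servlet handlers end up attached to the Master web UI, roughly like this (shown here only as a pointer, not part of MetricsSystem):

// After the metrics systems are started, their servlet handlers are attached to the web UI,
// so /metrics/master/json and /metrics/applications/json are served by the Master's Jetty server.
masterMetricsSystem.getServletHandlers.foreach(webUi.attachHandler)
applicationMetricsSystem.getServletHandlers.foreach(webUi.attachHandler)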

  metricsConfig.initialize()

3. Now look at new MetricsConfig(conf).initialize(). From the source below you can see that it ends up with propertyCategories: HashMap[String, Properties], keyed by master, applications and the wildcard "*"; each value is a Properties holding the sink servlet's class name and the servlet's URL path.

private[spark] class MetricsConfig(conf: SparkConf) extends Logging {

  private val DEFAULT_PREFIX = "*"
  private val INSTANCE_REGEX = "^(\\*|[a-zA-Z]+)\\.(.+)".r
  private val DEFAULT_METRICS_CONF_FILENAME = "metrics.properties"

  /** MetricsConfig.properties ends up containing:
    * ("*.sink.servlet.class", "org.apache.spark.metrics.sink.MetricsServlet")
    * ("*.sink.servlet.path", "/metrics/json")
    * ("master.sink.servlet.path", "/metrics/master/json")
    * ("applications.sink.servlet.path", "/metrics/applications/json")
    */
  private[metrics] val properties = new Properties()

  /** MetricsConfig.propertyCategories ends up containing:
    * (applications, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/applications/json})
    * (master, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/master/json})
    * (*, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/json})
    */
  private[metrics] var propertyCategories: mutable.HashMap[String, Properties] = null

  private def setDefaultProperties(prop: Properties) {
    prop.setProperty("*.sink.servlet.class", "org.apache.spark.metrics.sink.MetricsServlet")
    prop.setProperty("*.sink.servlet.path", "/metrics/json")
    prop.setProperty("master.sink.servlet.path", "/metrics/master/json")
    prop.setProperty("applications.sink.servlet.path", "/metrics/applications/json")
  }

  def initialize() {
    // Add default properties in case there's no properties file
    // (add the default keys and values to properties)
    setDefaultProperties(properties)

    // By default spark.metrics.conf is not set, so nothing is loaded here
    loadPropertiesFromFile(conf.getOption("spark.metrics.conf"))

    // Also look for the properties in provided Spark configuration
    val prefix = "spark.metrics.conf."
    conf.getAll.foreach {
      case (k, v) if k.startsWith(prefix) =>
        properties.setProperty(k.substring(prefix.length()), v)
      case _ =>
    }

    /** At this point subProperties returns a HashMap[String, Properties] like:
      * (applications, {sink.servlet.path=/metrics/applications/json})
      * (master, {sink.servlet.path=/metrics/master/json})
      * (*, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/json})
      */
    propertyCategories = subProperties(properties, INSTANCE_REGEX)

    // DEFAULT_PREFIX: "*"
    if (propertyCategories.contains(DEFAULT_PREFIX)) {
      // i.e. (*, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/json})
      val defaultProperty = propertyCategories(DEFAULT_PREFIX).asScala
      for ((inst, prop) <- propertyCategories if (inst != DEFAULT_PREFIX);
           (k, v) <- defaultProperty if (prop.get(k) == null)) {
        prop.put(k, v)
      }
      /** The loop above copies the "*" defaults into the master and applications Properties, adding
        * sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet to each, so propertyCategories ends up as:
        * (applications, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/applications/json})
        * (master, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/master/json})
        * (*, {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/json})
        */
    }
  }

  def subProperties(prop: Properties, regex: Regex): mutable.HashMap[String, Properties] = {
    val subProperties = new mutable.HashMap[String, Properties]
    prop.asScala.foreach { kv =>
      if (regex.findPrefixOf(kv._1.toString).isDefined) {
        val regex(prefix, suffix) = kv._1.toString
        subProperties.getOrElseUpdate(prefix, new Properties).setProperty(suffix, kv._2.toString)
      }
    }
    subProperties
  }
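
To see concretely how subProperties splits a key into an instance prefix and a per-instance property, here is a small standalone sketch of the INSTANCE_REGEX match (using one of the default keys from above):

// Standalone sketch of the INSTANCE_REGEX split used by subProperties.
val INSTANCE_REGEX = "^(\\*|[a-zA-Z]+)\\.(.+)".r

"master.sink.servlet.path" match {
  case INSTANCE_REGEX(prefix, suffix) =>
    // prefix = "master", suffix = "sink.servlet.path"
    println(s"instance=$prefix, property=$suffix")
  case _ =>
    // keys that do not start with an instance name (or "*") are skipped by subProperties
}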

  // For "master" this returns
  // {sink.servlet.class=org.apache.spark.metrics.sink.MetricsServlet, sink.servlet.path=/metrics/master/json}
  def getInstance(inst: String): Properties = {
    // propertyCategories is the HashMap[String, Properties] built by subProperties above
    propertyCategories.get(inst) match {
      case Some(s) => s
      case None => propertyCategories.getOrElse(DEFAULT_PREFIX, new Properties)
    }
  }

  /**
   * Loads configuration from a config file. If no config file is provided, try to get the file
   * from the class path.
   * By default spark.metrics.conf is not set, so path is None, and there is usually no
   * metrics.properties file on the class path either, so nothing is loaded.
   */
  private[this] def loadPropertiesFromFile(path: Option[String]): Unit = {
    var is: InputStream = null
    try {
      is = path match {
        case Some(f) => new FileInputStream(f)
        case None => Utils.getSparkClassLoader.getResourceAsStream(DEFAULT_METRICS_CONF_FILENAME)
      }
      if (is != null) {
        properties.load(is)
      }
    } catch {
      case e: Exception =>
        val file = path.getOrElse(DEFAULT_METRICS_CONF_FILENAME)
        logError(s"Error loading configuration file $file", e)
    } finally {
      if (is != null) {
        is.close()
      }
    }
  }
}
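
Putting MetricsConfig together, here is a hedged usage sketch of what the walkthrough above implies (MetricsConfig is private[spark], so code like this only compiles from inside the org.apache.spark.metrics package):

// Assuming a SparkConf with no metrics settings, only the defaults apply.
val metricsConfig = new MetricsConfig(new SparkConf())
metricsConfig.initialize()

val masterProps = metricsConfig.getInstance("master")
// expected, per the walkthrough above:
//   sink.servlet.class = org.apache.spark.metrics.sink.MetricsServlet
//   sink.servlet.path  = /metrics/master/json

val executorProps = metricsConfig.getInstance("executor")
// there is no "executor" entry, so this falls back to the "*" defaults:
//   sink.servlet.class = org.apache.spark.metrics.sink.MetricsServlet
//   sink.servlet.path  = /metrics/json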

===> Back from MetricsSystem to the Master.

 

  // Spark internal sources, such as MasterSource and WorkerSource, collect a Spark component's state;
  // their instances are added to the MetricsSystem after it is created.
  private val masterSource = new MasterSource(this)

  // After onStart, webUi will be set
  private var webUi: MasterWebUI = null

  private val masterPublicAddress = {
    // SPARK_PUBLIC_DNS is usually not set, in which case this falls back to RpcAddress.host, i.e. the current node (e.g. luyl152)
    val envVar = conf.getenv("SPARK_PUBLIC_DNS")
    if (envVar != null) envVar else address.host
  }
  // masterUrl is e.g. spark://luyl152:7077
  private val masterUrl = address.toSparkURL
  // Set in onStart() below; masterWebUiUrl becomes e.g. http://luyl152:8080
  private var masterWebUiUrl: String = _

  private var state = RecoveryState.STANDBY

  private var persistenceEngine: PersistenceEngine = _

  private var leaderElectionAgent: LeaderElectionAgent = _

  private var recoveryCompletionTask: ScheduledFuture[_] = _

  private var checkForWorkerTimeOutTask: ScheduledFuture[_] = _

  // As a temporary workaround before better ways of configuring memory, we allow users to set
  // a flag that will perform round-robin scheduling across the nodes (spreading out each app
  // among all the nodes) instead of trying to consolidate each app onto a small # of nodes.
  private val spreadOutApps = conf.getBoolean("spark.deploy.spreadOut", true)

  // Default maxCores for applications that don't specify it (i.e. pass Int.MaxValue)
  // defaultCores defaults to Int.MaxValue (i.e. no cap); values below 1 are rejected by the check below
  private val defaultCores = conf.getInt("spark.deploy.defaultCores", Int.MaxValue)
  if (defaultCores < 1) {
    throw new SparkException("spark.deploy.defaultCores must be positive")
  }

  // Alternative application submission gateway that is stable across Spark versions
  // When spark.master.rest.enabled=true (the default), a REST server is started; it is mainly used for cluster-mode submissions
  private val restServerEnabled = conf.getBoolean("spark.master.rest.enabled", true)
  private var restServer: Option[StandaloneRestServer] = None
  private var restServerBoundPort: Option[Int] = None
  /** The RpcEndpoint lifecycle: onStart (if defined) runs first -> receive/receiveAndReply and the other
    * message-handling methods -> onStop.
    * onStart builds the web UI, starts the REST server, schedules the periodic worker-timeout check, and
    * performs the Master HA-related setup.
    */

  override def onStart(): Unit = {
    logInfo("Starting Spark master at " + masterUrl)
    logInfo(s"Running Spark version ${org.apache.spark.SPARK_VERSION}")
    webUi = new MasterWebUI(this, webUiPort)

Next, we will analyze how MasterWebUI(MasterRpcEndPoint, 8080) initializes the web page.

