Spark Study Notes 2: HA Environment Setup

Spark HA can be configured in two ways (see the lecture notes):
(1) File-directory based: for development and testing (single-machine environment)
    (*) The state of Workers and Applications is written to a directory
    (*) After a crash, the state is recovered from that directory
    (*) Configure on BigData11:
        (1) Create a recovery directory: mkdir /root/training/spark-2.1.0-bin-hadoop2.7/recovery
        (2) Edit the configuration file spark-env.sh:
            export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/root/training/spark-2.1.0-bin-hadoop2.7/recovery"
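As a quick sanity check, the two steps above can be verified from the shell (paths as configured above):

    # create the recovery directory if it does not exist yet, then
    # confirm the recovery settings were written to spark-env.sh
    mkdir -p /root/training/spark-2.1.0-bin-hadoop2.7/recovery
    grep recoveryMode /root/training/spark-2.1.0-bin-hadoop2.7/conf/spark-env.sh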

Test:

  1. Start Spark: sbin/start-all.sh

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# sbin/start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /root/training/spark-2.1.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-BigData11.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /root/training/spark-2.1.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-BigData11.out
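
Both daemons should now be running; jps gives a quick confirmation (the PIDs below are illustrative):

    jps
    # 2345 Master
    # 2456 Worker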

2. Run spark-shell against the master: bin/spark-shell --master spark://BigData11:7077

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# bin/spark-shell --master spark://BigData11:7077
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/10/11 20:49:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/10/11 20:49:36 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/10/11 20:49:36 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
18/10/11 20:49:37 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.163.11:4040
Spark context available as 'sc' (master = spark://BigData11:7077, app id = app-20181011204923-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

3. Stop the master: sbin/stop-master.sh

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# sbin/stop-master.sh 
stopping org.apache.spark.deploy.master.Master

Observe the Spark logs while the master is down.
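
For example, tail the master log file named in the start-all.sh output above:

    tail -f /root/training/spark-2.1.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-BigData11.out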

4. Start the master again: sbin/start-master.sh

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# sbin/start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /root/training/spark-2.1.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-BigData11.out
[root@BigData11 spark-2.1.0-bin-hadoop2.7]# 

Observe Spark: after the restart, the master recovers the Worker and Application state from the recovery directory, and the running spark-shell re-registers with it.
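
You can also inspect the recovery directory itself; in FILESYSTEM mode the master persists one file per registered worker and application (exact file names vary by version, so treat this listing as indicative):

    ls -l /root/training/spark-2.1.0-bin-hadoop2.7/recovery
    # expect entries such as app_app-20181011204923-0000 and worker_worker-...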


        
(2) ZooKeeper based (production environment)
     Prerequisite: set up ZooKeeper and start it with zkServer.sh start (check its status with zkServer.sh status)
     Master nodes: BigData12, BigData13
     Worker nodes: BigData13, BigData14

    Edit spark-env.sh:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=BigData12:2181,BigData13:2181,BigData14:2181 -Dspark.deploy.zookeeper.dir=/spark"

Also comment out the SPARK_MASTER_HOST and SPARK_MASTER_PORT lines; a sketch of the result follows.
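
The resulting spark-env.sh on each master node might look like this (the commented values are illustrative; comment out whatever host/port lines your file actually contains):

    # export SPARK_MASTER_HOST=BigData12   # a fixed master host no longer applies under HA
    # export SPARK_MASTER_PORT=7077
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=BigData12:2181,BigData13:2181,BigData14:2181 -Dspark.deploy.zookeeper.dir=/spark"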


    
On BigData13, additionally start a second, standalone master:
      sbin/start-master.sh
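
With both masters up, clients should list both in the master URL so they can fail over automatically, for example:

    bin/spark-shell --master spark://BigData12:7077,BigData13:7077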

Check the ZooKeeper information:
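
One way to inspect it is the ZooKeeper CLI; the znode names under /spark below are those created by Spark's ZooKeeper recovery mode, but verify them on your own cluster:

    zkCli.sh -server BigData12:2181
    # inside the zkCli prompt:
    ls /spark
    # e.g. [leader_election, master_status]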

Try killing the master process on BigData12 or BigData13, then refresh the web UI to watch the standby master take over the Spark cluster state.
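
A minimal way to simulate the failure on the active master node (a blunt kill -9, so only do this on a test cluster):

    jps | grep Master                                # find the Master PID
    kill -9 $(jps | grep Master | awk '{print $1}')  # kill the active master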
