Spark Study Notes 2: HA Environment Setup

Spark HA: two approaches (see the course handout).
(1) File-system based: development and testing (single-node environment)
    (*) The Master writes Worker and Application state information to a directory
    (*) If the Master crashes, the restarted Master recovers that state from the directory
    (*) Configure on BigData11:
        (1) Create a recovery directory: mkdir /root/training/spark-2.1.0-bin-hadoop2.7/recovery
        (2) Edit the configuration file spark-env.sh:
            export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/root/training/spark-2.1.0-bin-hadoop2.7/recovery"
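
After restarting the daemons with this setting, the Master writes one serialized file per registered Worker, Application, and Driver into the recovery directory (this is the behavior of Spark's FileSystemPersistenceEngine). A quick sanity check, with illustrative file names:

    ls /root/training/spark-2.1.0-bin-hadoop2.7/recovery
    # app_app-20181011204923-0000                  (one file per registered application)
    # worker_worker-20181011204512-BigData11-...   (one file per registered worker; names illustrative)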

Test:

1. Start Spark: sbin/start-all.sh

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# sbin/start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /root/training/spark-2.1.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-BigData11.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /root/training/spark-2.1.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-BigData11.out
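
To confirm both daemons came up, jps should list a Master and a Worker process on BigData11:

    jps
    # expected output (PIDs will differ):
    # 1317 Master
    # 1429 Worker
    # 1501 Jps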

2. Run spark-shell against the cluster: bin/spark-shell --master spark://BigData11:7077

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# bin/spark-shell --master spark://BigData11:7077
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/10/11 20:49:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/10/11 20:49:36 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/10/11 20:49:36 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
18/10/11 20:49:37 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.163.11:4040
Spark context available as 'sc' (master = spark://BigData11:7077, app id = app-20181011204923-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
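
Before testing failover, the cluster can be smoke-tested with the bundled SparkPi example; a sketch (the examples jar path below matches the stock Spark 2.1.0 distribution, adjust if yours differs):

    bin/spark-submit --master spark://BigData11:7077 \
      --class org.apache.spark.examples.SparkPi \
      examples/jars/spark-examples_2.11-2.1.0.jar 100
    # a healthy cluster prints a line like: Pi is roughly 3.14...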

3. Stop the Master: sbin/stop-master.sh

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# sbin/stop-master.sh 
stopping org.apache.spark.deploy.master.Master

Observe the spark-shell logs: the client should warn that its connection to the Master has been lost, while the shell session itself stays alive.

4. Restart the Master: sbin/start-master.sh

[root@BigData11 spark-2.1.0-bin-hadoop2.7]# sbin/start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /root/training/spark-2.1.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-BigData11.out
[root@BigData11 spark-2.1.0-bin-hadoop2.7]# 

Observe Spark again: the restarted Master reads the Worker and Application state back from the recovery directory, and the spark-shell session reconnects to it.


        
(2) ZooKeeper based (production environment)
     Prerequisite: set up ZooKeeper and start it with zkServer.sh start (check its status with zkServer.sh status)
     Master nodes: BigData12, BigData13
     Worker nodes: BigData13, BigData14

    Edit spark-env.sh:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=BigData12:2181,BigData13:2181,BigData14:2181 -Dspark.deploy.zookeeper.dir=/spark"

Also comment out SPARK_MASTER_HOST and SPARK_MASTER_PORT, since the active Master is now chosen by ZooKeeper election; see the sketch below.
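
For reference, the relevant section of spark-env.sh on the Master nodes would then look roughly like this (a sketch; keep your existing JAVA_HOME and related settings as they are):

    # export SPARK_MASTER_HOST=BigData12   # commented out: the active Master
    # export SPARK_MASTER_PORT=7077        # is now decided by ZooKeeper election
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=BigData12:2181,BigData13:2181,BigData14:2181 -Dspark.deploy.zookeeper.dir=/spark"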


    
On BigData13, additionally start a standby Master by hand:
      sbin/start-master.sh
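
Each Master's web UI on port 8080 reports its role; a quick check from a shell, assuming curl is available and BigData12 won the initial election:

    curl -s http://BigData12:8080 | grep -o 'ALIVE\|STANDBY' | head -1    # expect: ALIVE
    curl -s http://BigData13:8080 | grep -o 'ALIVE\|STANDBY' | head -1    # expect: STANDBY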

Check the information stored in ZooKeeper, for example as shown below.
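
One way is the ZooKeeper CLI; under the configured /spark znode, Spark keeps a leader-election path and the persisted master state (the child names below follow Spark's ZooKeeper persistence layout; treat them as illustrative):

    zkCli.sh -server BigData12:2181
    ls /spark
    # [leader_election, master_status]
    ls /spark/master_status
    # serialized Worker/Application entries appear here once they register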

To test failover, kill the Master process on BigData12 or BigData13, then refresh the Master web UIs and watch the standby take over; a sketch of the whole test follows.
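
spark-shell accepts both Masters in a single URL, so a running session reconnects to whichever Master becomes active:

    # on a client node: register against both Masters
    bin/spark-shell --master spark://BigData12:7077,BigData13:7077

    # on the node running the active Master: find and kill it
    jps                    # note the PID of the Master process
    kill -9 <Master-PID>   # <Master-PID> is a placeholder for that PID

    # refresh http://BigData13:8080 -- the standby should switch to ALIVE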
