Study Notes: Building a Highly Available Spark Cluster with ZooKeeper

The Spark Master node is a single point of failure. To achieve high availability we rely on ZooKeeper and start at least two Master nodes; the configuration is fairly simple.

First stop all Spark services, then install and start ZooKeeper.

Cluster plan:

Hostname        IP address    Processes
master.hadoop   192.168.1.2   zookeeper, master, worker
slave1.hadoop   192.168.1.3   zookeeper, master, worker
slave2.hadoop   192.168.1.4   zookeeper, worker
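All three nodes must be able to resolve each other's hostnames. A minimal sketch of the /etc/hosts entries (identical on every node), assuming no DNS is in use:

192.168.1.2 master.hadoop
192.168.1.3 slave1.hadoop
192.168.1.4 slave2.hadoop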

1. Install the Spark cluster first (see the Spark 2.2.0 installation tutorial)
2. Install ZooKeeper (see the ZooKeeper installation tutorial)

3. High-availability configuration

In spark-env.sh, delete the SPARK_MASTER_IP entry and add the following:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=master.hadoop:2181,slave1.hadoop:2181,slave2.hadoop:2181 -Dspark.deploy.zookeeper.dir=/spark"

Explanation:

#-Dspark.deploy.recoveryMode=ZOOKEEPER #recover the Master through ZooKeeper when a failure occurs

#-Dspark.deploy.zookeeper.url=master.hadoop:2181,slave1.hadoop:2181,slave2.hadoop:2181 #the comma-separated list of ZooKeeper servers (host:port)

#-Dspark.deploy.zookeeper.dir=/spark #the directory in ZooKeeper where Spark stores its recovery data

[root@master conf]# vi spark-env.sh 
export JAVA_HOME=/apps/jdk1.8.0_171
export SCALA_HOME=/apps/scala-2.11.7
#export HADOOP_HOME=/apps/hadoop-2.8.0/
#export HADOOP_CONF_DIR=/apps/hadoop-2.8.0/etc/hadoop
#export SPARK_MASTER_IP=master.hadoop
export SPARK_WORKER_MEMORY=512m
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=master.hadoop:2181,slave1.hadoop:2181,slave2.hadoop:2181 -Dspark.deploy.zookeeper.dir=/spark"

 

Then edit the slaves file:

[root@master conf]# vi slaves
# A Spark Worker will be started on each of the machines listed below.
master.hadoop
slave1.hadoop
slave2.hadoop
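Both conf files must be identical on every node. A minimal way to push them out with scp, assuming Spark is installed at /apps/spark-2.2.0 on all hosts (as in the listing above) and passwordless SSH is configured:

[root@master conf]# for host in slave1.hadoop slave2.hadoop; do
>   scp spark-env.sh slaves root@$host:/apps/spark-2.2.0/conf/
> done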

4. Startup

Start ZooKeeper first.

You can start it on each machine individually, or use a script to start the whole ensemble at once (a sketch follows the link below).

Run this command on every machine:

[root@master /]# zkServer.sh start

Startup script: https://blog.csdn.net/nuc2015/article/details/81045941
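For reference, a minimal sketch of such a script, assuming passwordless SSH as root between the nodes and that /etc/profile puts zkServer.sh on each host's PATH (the script name zk-all.sh is made up for illustration):

#!/bin/bash
# zk-all.sh: run a zkServer.sh subcommand (start/stop/status) on every node.
# Non-interactive SSH sessions do not load the login environment,
# so /etc/profile is sourced explicitly before calling zkServer.sh.
for host in master.hadoop slave1.hadoop slave2.hadoop; do
    echo "==== $host ===="
    ssh root@"$host" "source /etc/profile; zkServer.sh $1"
done

Run it as sh zk-all.sh start (or stop / status).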

After startup, one node becomes the leader and the rest are followers.
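Each node's role can be checked with zkServer.sh status; the output ends with Mode: leader on exactly one node and Mode: follower on the others:

[root@master /]# zkServer.sh status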

 

Start Spark on the first machine:

[root@master spark-2.2.0]# sbin/start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.hadoop.out
master.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.hadoop.out
slave1.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.hadoop.out
slave2.hadoop: starting org.apache.spark.deploy.worker.Worker, logging to /apps/spark-2.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.hadoop.out
[root@master spark-2.2.0]# jps
2321 Jps
2149 Worker
2028 QuorumPeerMain
2076 Master
[root@master spark-2.2.0]# 
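Optionally, confirm that Spark has registered its recovery state under the /spark directory configured earlier. A quick check with zkCli.sh; the exact child znodes (e.g. leader_election and master_status) may vary by Spark version:

[root@master /]# zkCli.sh -server master.hadoop:2181
[zk: master.hadoop:2181(CONNECTED) 0] ls /spark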

Start a second Master on its own on the second machine:

[root@slave1 spark-2.2.0]# sbin/start-master.sh 

Check the Masters' web UIs (port 8080 by default):

The first Master shows:

  • Status: ALIVE

The second Master shows:

  • Status: STANDBY

The setup is complete.
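To confirm that failover really works, stop the ALIVE Master and watch the standby take over. Per the Spark documentation, already-running applications are unaffected, and recovery should take roughly one to two minutes:

[root@master spark-2.2.0]# sbin/stop-master.sh

After that delay, the web UI on slave1.hadoop should switch from Status: STANDBY to Status: ALIVE as the workers re-register. Restarting the stopped Master brings it back as the new standby:

[root@master spark-2.2.0]# sbin/start-master.sh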

 

 
