(1) Operating system: RHEL 6.2, 64-bit
(2) Two nodes: spark1 (192.168.232.147) and spark2 (192.168.232.152)
(3) A Hadoop 2.2 cluster is already installed on both nodes
2. Install Zookeeper
(1) Download Zookeeper: http://apache.claz.org/zookeeper ... keeper-3.4.5.tar.gz
(2) Extract it into the /root/install/ directory
(3) Create two directories, one for data and one for logs, for example:
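A minimal sketch of that step, assuming both directories sit under the install directory (matching the dataDir/dataLogDir values set in zoo.cfg below):
mkdir -p /root/install/zookeeper-3.4.5/data
mkdir -p /root/install/zookeeper-3.4.5/logs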
(4) Configure: go into the conf directory, rename zoo_sample.cfg to zoo.cfg (this step is required, otherwise Zookeeper will not recognize the file), and add the following lines:
dataDir=/root/install/zookeeper-3.4.5/data
dataLogDir=/root/install/zookeeper-3.4.5/logs
server.1=spark1:2888:3888
server.2=spark2:2888:3888
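For reference, the finished zoo.cfg should look roughly like the sketch below, assuming the stock defaults carried over from zoo_sample.cfg (tickTime, initLimit, syncLimit, clientPort) are kept; the clientPort of 2181 is the port referenced later in spark-env.sh:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/root/install/zookeeper-3.4.5/data
dataLogDir=/root/install/zookeeper-3.4.5/logs
server.1=spark1:2888:3888
server.2=spark2:2888:3888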
Then create the myid file on spark1; its value must match the server.1 entry in zoo.cfg:
cd /root/install/zookeeper-3.4.5/data
echo 1 > myid
Copy the whole Zookeeper installation to spark2:
scp -r /root/install/zookeeper-3.4.5 root@spark2:/root/install/
On spark2, set the myid value to 2 to match server.2:
cd /root/install/zookeeper-3.4.5/data
echo 2 > myid
Start Zookeeper on both nodes:
cd /root/install/zookeeper-3.4.5
bin/zkServer.sh start
[root@spark2 zookeeper-3.4.5]# bin/zkServer.sh start
JMX enabled by default
Using config: /root/install/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@spark2 zookeeper-3.4.5]# jps
2490 Jps
2479 QuorumPeerMain
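To confirm that the two-node quorum actually formed, zkServer.sh also provides a status subcommand; run it on each node and one should report leader, the other follower (a quick optional check, not required by the rest of the setup):
cd /root/install/zookeeper-3.4.5
bin/zkServer.sh status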
3. Configure Spark for master HA
(1) Go into Spark's conf directory and edit spark-env.sh as follows:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=spark1:2181,spark2:2181 -Dspark.deploy.zookeeper.dir=/spark"
export JAVA_HOME=/root/install/jdk1.7.0_21
#export SPARK_MASTER_IP=spark1
#export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
(2) Copy the same spark-env.sh to spark2:
scp spark-env.sh root@spark2:/root/install/spark-1.0/conf/
(3) Start the cluster from spark1:
[root@spark1 spark-1.0]# sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-spark1.out
spark1: starting org.apache.spark.deploy.worker.Worker, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark1.out
spark2: starting org.apache.spark.deploy.worker.Worker, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark2.out
(4) Start a standby master on spark2:
[root@spark2 spark-1.0]# sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-spark2.out
(5) Verify the processes on both nodes with jps:
[root@spark1 spark-1.0]# jps
5797 Worker
5676 Master
6287 Jps
2602 QuorumPeerMain
[root@spark2 spark-1.0]# jps
2479 QuorumPeerMain
5750 Jps
5534 Worker
5635 Master
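With both masters registered in Zookeeper, an application should be given both master addresses so it automatically follows whichever one is active. A minimal sketch, assuming the default master port 7077:
bin/spark-shell --master spark://spark1:7077,spark2:7077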
4. Test master failover
(1) First check how the two nodes are running: spark1 is currently the active master and spark2 is on standby.
(2) Stop the master service on spark1:
[root@spark1 spark-1.0]# sbin/stop-master.sh
stopping org.apache.spark.deploy.master.Master
[root@spark1 spark-1.0]# jps
5797 Worker
6373 Jps
2602 QuorumPeerMain
(4) Then check spark2's status in the browser again; as the screenshot below shows, spark2 has taken over as the master.
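If a browser is not handy, a similar check can be done from the shell; a sketch assuming the default master web UI port 8080 and that the UI page text contains the master's status line:
curl -s http://spark2:8080/ | grep Status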