Prerequisites: install Linux, the JDK, and so on.
Unpack: tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz -C ~/training/
Note: some of Spark's scripts share names with Hadoop's (e.g. start-all.sh and stop-all.sh), so put only one of the two on the PATH; they cannot both be configured at the same time.
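If you do put Spark on the PATH, a minimal sketch of the ~/.bash_profile entries, assuming the install path above (adjust to your environment):
export SPARK_HOME=/root/training/spark-2.1.0-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH
# Deliberately leave $SPARK_HOME/sbin and $HADOOP_HOME/sbin off the PATH:
# both ship start-all.sh/stop-all.sh, which is the conflict mentioned above.
Scripts under sbin can then be invoked by their path, as in the steps below.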
Configuration files: /root/training/spark-2.1.0-bin-hadoop2.7/conf/spark-env.sh
/root/training/spark-2.1.0-bin-hadoop2.7/conf/slaves
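The conf directory ships only template files; create the real config files from them first (a sketch, using the paths above):
cd /root/training/spark-2.1.0-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves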
(1) Pseudo-distributed: bigdata11
spark-env.sh
export JAVA_HOME=/root/training/jdk1.8.0_144
export SPARK_MASTER_HOST=bigdata11
export SPARK_MASTER_PORT=7077
slaves
bigdata11
Start: sbin/start-all.sh
Spark Web Console (embedded Jetty on port 8080): http://ip:8080
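A quick sanity check, assuming the hostname bigdata11 configured above: jps should list a Master and a Worker process, and a trivial job should run from spark-shell connected to the cluster.
jps    # expect Master and Worker (plus Jps itself)
bin/spark-shell --master spark://bigdata11:7077
scala> sc.parallelize(1 to 100).sum    # should print 5050.0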
(2) Fully distributed: three machines
Master node: bigdata12
Worker nodes: bigdata13, bigdata14
spark-env.sh
export JAVA_HOME=/root/training/jdk1.8.0_144
export SPARK_MASTER_HOST=bigdata12
export SPARK_MASTER_PORT=7077
slaves
bigdata13
bigdata14
Copy the installation to the worker nodes:
scp -r spark-2.1.0-bin-hadoop2.7/ root@bigdata13:/root/training
scp -r spark-2.1.0-bin-hadoop2.7/ root@bigdata14:/root/training
Start on the master node: sbin/start-all.sh
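To verify the cluster, one option is to submit the bundled SparkPi example to the master (the jar name below is as shipped with the Spark 2.1.0 / Scala 2.11 prebuilt package; adjust if yours differs):
bin/spark-submit --master spark://bigdata12:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.1.0.jar 100
The running application should also show up on the master's web console at http://bigdata12:8080.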
Note:
For the fully distributed setup, be sure to copy slaves.template to a file named slaves (the cp command is shown above) and list the worker hostnames in it; otherwise the cluster build fails. (Learned the hard way.)