Spark Environment Setup (Part 2) --- Building the Spark Cluster

This article continues from the previous post, "Spark Environment Setup (Part 1) --- Building the Hadoop Cluster". It covers the installation and configuration of Scala and Spark.

VII. Scala Installation

1. Download

The Scala version installed in this article is scala-2.12.8:

https://downloads.lightbend.com/scala/2.12.8/scala-2.12.8.tgz

2. Installation Reference

Scala is installed under the /opt directory.

https://jingyan.baidu.com/article/215817f7ae90e01eda142312.html
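If the reference above is unavailable: the installation amounts to downloading the archive and unpacking it into /opt (a minimal sketch, assuming wget is available and the user can write to /opt):

wget https://downloads.lightbend.com/scala/2.12.8/scala-2.12.8.tgz

sudo tar -zxvf scala-2.12.8.tgz -C /opt

/opt/scala-2.12.8/bin/scala -version

The last command should report version 2.12.8 if the extraction succeeded.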

VIII. Spark Installation

1. Download

The Spark version installed in this article is spark-2.3.3, installed under the /opt directory:

https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz
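A minimal download-and-extract sketch, analogous to the Scala step (again assuming wget and write access to /opt):

wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz

sudo tar -zxvf spark-2.3.3-bin-hadoop2.7.tgz -C /opt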

2. Configure spark-env.sh

step01: Copy spark-env.sh.template to spark-env.sh

cp /opt/spark-2.3.3-bin-hadoop2.7/conf/spark-env.sh.template /opt/spark-2.3.3-bin-hadoop2.7/conf/spark-env.sh

step02: In a terminal window, run gedit /opt/spark-2.3.3-bin-hadoop2.7/conf/spark-env.sh and add the following:

# Hadoop configuration directory (note: etc/hadoop, not the Hadoop install root)
export HADOOP_CONF_DIR=/opt/hadoop-2.8.5/etc/hadoop
export JAVA_HOME=/usr/java_8/jdk1.8.0_211
export SCALA_HOME=/opt/scala-2.12.8
# Standalone master address and ports (web UI on 8080)
export SPARK_MASTER_IP=192.168.149.132
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
# Worker ports and per-node resources: 1 worker instance with 1 core and 2g memory
export SPARK_WORKER_PORT=7078
export SPARK_WORKER_WEBUI_PORT=8081
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=2g
# The glob must be *.jar (the original *.jars matches nothing)
export SPARK_JAR=/opt/spark-2.3.3-bin-hadoop2.7/jars/*.jar

3. Configure spark-defaults.conf

step01: Copy spark-defaults.conf.template to spark-defaults.conf

cp /opt/spark-2.3.3-bin-hadoop2.7/conf/spark-defaults.conf.template /opt/spark-2.3.3-bin-hadoop2.7/conf/spark-defaults.conf

step02: In a terminal window, run gedit /opt/spark-2.3.3-bin-hadoop2.7/conf/spark-defaults.conf and add the following:

spark.master    spark://192.168.149.132:7077
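With spark.master set here, spark-submit targets the standalone master by default. Once the cluster is running (section 7), a quick smoke test is the SparkPi example shipped with Spark (a sketch; the examples jar name assumes the stock spark-2.3.3-bin-hadoop2.7 layout):

/opt/spark-2.3.3-bin-hadoop2.7/bin/spark-submit --class org.apache.spark.examples.SparkPi /opt/spark-2.3.3-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.3.jar 10

A line like "Pi is roughly 3.14..." in the output confirms that jobs reach the cluster.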

4. Configure slaves

step01: Copy slaves.template to slaves

cp /opt/spark-2.3.3-bin-hadoop2.7/conf/slaves.template /opt/spark-2.3.3-bin-hadoop2.7/conf/slaves

step02: In a terminal window, run gedit /opt/spark-2.3.3-bin-hadoop2.7/conf/slaves and add the following:

192.168.149.133

192.168.149.134
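If /etc/hosts on master already maps the worker hostnames (it does in the setup from Part 1; the scp commands in the next step depend on it), hostnames work equally well here:

slave01

slave02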

5. Copy the Spark installation configured on master to slave01 and slave02

scp -r /opt/spark-2.3.3-bin-hadoop2.7 slave01:/opt/

scp -r /opt/spark-2.3.3-bin-hadoop2.7 slave02:/opt/
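Note that the copied spark-env.sh references SCALA_HOME and JAVA_HOME, so Scala and the JDK must exist at the same paths on both workers. If Scala is not yet installed there, a sketch:

scp -r /opt/scala-2.12.8 slave01:/opt/

scp -r /opt/scala-2.12.8 slave02:/opt/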

6. Configure the System Environment

step01: In a terminal window, run gedit /etc/profile and add the following:

export SPARK_HOME=/opt/spark-2.3.3-bin-hadoop2.7

export PATH=$PATH:$SPARK_HOME/bin

step02: In a terminal window, run source /etc/profile to make the configuration take effect.
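To confirm that the PATH change took effect, run a quick version check; it should print the Spark 2.3.3 banner:

spark-submit --version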

7. Start the Spark Cluster

step01: In a terminal window, run sh /opt/hadoop-2.8.5/sbin/start-all.sh to start Hadoop.

step02: In a terminal window, run sh /opt/spark-2.3.3-bin-hadoop2.7/sbin/start-master.sh and then sh /opt/spark-2.3.3-bin-hadoop2.7/sbin/start-slaves.sh to start Spark.
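Alternatively, Spark ships a combined script that starts the master and every worker listed in conf/slaves in one step. It shares its name with Hadoop's start-all.sh, so the full path matters:

sh /opt/spark-2.3.3-bin-hadoop2.7/sbin/start-all.sh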

Note: before starting, make sure the login user (deamon in this article) has read and write permission on the files in the /opt/hadoop-2.8.5/log directory (chmod 777 /opt/hadoop-2.8.5/log).

8. Verify That the Cluster Started Successfully

step01: Run jps on master, slave01, and slave02 to verify that the expected processes have started.
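For this setup, jps output should look roughly like the following (PIDs omitted; the exact Hadoop daemons depend on the Part 1 configuration):

master: NameNode, SecondaryNameNode, ResourceManager, Master

slave01/slave02: DataNode, NodeManager, Worker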

step02: In a browser, open 192.168.149.132:8080; the Spark master web UI should list both workers as ALIVE.

IX. Summary

The Spark environment setup is now complete.
