Starting a Spark 2.4.5 + Hadoop 2.10.0 cluster with Docker + CentOS 7 on macOS

I. Build the image

1. CentOS container

# Pull the CentOS 7 image (the article targets CentOS 7, so pin the tag)
docker pull centos:7
# Create a container
docker run --name centos -itd centos:7 /bin/bash
# Attach to the running container
docker attach centos

2. Download the installation packages

# Install wget, SSH, and other basic utilities
yum install -y net-tools which openssh-clients openssh-server iproute.x86_64 wget passwd
# Download JDK 1.8: https://www.oracle.com/cn/java/technologies/javase-jdk8-downloads.html
# (the AuthParam token below is session-specific; if it has expired, get a fresh link from the Oracle page above)
wget https://download.oracle.com/otn/java/jdk/8u231-b11/5b13a193868b4bf28bcb45c792fce896/jdk-8u231-linux-x64.tar.gz?AuthParam=1586404625_30c185b984c1c247e5e9c10cb056d0a5
# Download Hadoop 2.10: https://hadoop.apache.org/old/releases.html
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.10.0/hadoop-2.10.0.tar.gz
# Download Spark 2.4: https://spark.apache.org/downloads.html
wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.4.5/spark-2.4.5-bin-without-hadoop.tgz
# Download Scala 2.12
wget https://downloads.lightbend.com/scala/2.12.3/scala-2.12.3.tgz
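
Before going further it is worth checking that the archives downloaded completely, since mirror links occasionally serve truncated files. A minimal check; compare the output against the checksums published on the Apache and Scala download pages (the expected values are not reproduced here):

# Print checksums of the downloaded archives for comparison with the official values
sha512sum hadoop-2.10.0.tar.gz spark-2.4.5-bin-without-hadoop.tgz scala-2.12.3.tgz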

3. Configure SSH

# Set the root password
passwd

Edit the SSH configuration file

# Edit the sshd configuration
sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
# Start the SSH service
systemctl start sshd.service
# This fails here with the following error:
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
# Ignore it for now and continue with the commands below; starting the container with /usr/sbin/init, as described in the last step of this section, resolves the problem.

# Exit the container
exit
# Save the container you just modified as a new image (use the container ID shown by docker ps)
docker commit 6a5967a064bc my-ssh-centos
# Start a container from the new image (--privileged and the trailing /usr/sbin/init are both required: without privileged mode and init as PID 1, systemctl cannot start services)
docker run -tid --privileged --name my-ssh-centos my-ssh-centos /usr/sbin/init
# Enter the container
docker exec -it my-ssh-centos /bin/bash
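
With /usr/sbin/init running as PID 1, sshd can now be managed through systemctl inside the container. A quick sanity check (standard systemd commands; the sshd unit comes from the openssh-server package installed earlier):

# Enable sshd on boot, start it now, and confirm it is active
systemctl enable sshd
systemctl start sshd
systemctl status sshd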

Set up passwordless SSH login

cd ~; ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa; cd .ssh; cat id_rsa.pub >> authorized_keys
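
To confirm the key pair works, an SSH login to the local host should now succeed without a password prompt (the first connection only asks to accept the host key):

ssh -o StrictHostKeyChecking=no localhost hostname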

4. Install the JDK

# Create the installation directory
mkdir /usr/local/java/
# Extract the archive into the installation directory
tar -zxvf jdk-8u231-linux-x64.tar.gz -C /usr/local/java/
# Set the environment variables: add the following to ~/.bashrc
export JAVA_HOME=/usr/local/java/jdk1.8.0_231
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$PATH:$JRE_HOME/bin
# Reload the environment
source ~/.bashrc

5. Install Scala

# Create the installation directory
mkdir /usr/local/scala/
# Extract the archive into the installation directory
tar -zxvf scala-2.12.3.tgz -C /usr/local/scala/
# Set the environment variables: add the following to ~/.bashrc
export SCALA_HOME=/usr/local/scala/scala-2.12.3
export PATH=$PATH:$SCALA_HOME/bin
# Reload the environment
source ~/.bashrc

6. Install Hadoop

# Create the installation directory
mkdir /usr/local/hadoop/
# Extract the archive into the installation directory
tar -zxvf hadoop-2.10.0.tar.gz -C /usr/local/hadoop/
# Set the environment variables: add the following to ~/.bashrc
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.10.0
export HADOOP_CONFIG_HOME=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
# Reload the environment
source ~/.bashrc
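
A quick sanity check that the JDK, Scala, and Hadoop installations are all on the PATH; with the versions used above, the commands should report 1.8.0_231, 2.12.3, and 2.10.0 respectively:

java -version
scala -version
hadoop version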

Create the directories used by the Hadoop cluster

cd $HADOOP_HOME;mkdir tmp;mkdir namenode;mkdir datanode;cd $HADOOP_CONFIG_HOME/

Edit core-site.xml

<configuration>
    <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/local/hadoop/hadoop-2.10.0/tmp</value>
            <description>A base for other temporary directories.</description>
    </property>

    <property>
            <name>fs.default.name</name>
            <value>hdfs://master:9000</value>
            <final>true</final>
            <description>The name of the default file system. 
            A URI whose scheme and authority determine the 
            FileSystem implementation. The uri's scheme 
            determines the config property (fs.SCHEME.impl) 
            naming the FileSystem implementation class. The 
            uri's authority is used to determine the host,
            port, etc. for a filesystem.        
            </description>
    </property>
</configuration>

Edit hdfs-site.xml to set the replication factor and the NameNode/DataNode directory paths

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <final>true</final>
        <description>Default block replication.
        The actual number of replications can be specified when the file is created.
        The default is used if replication is not specified in create time.
        </description>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/hadoop-2.10.0/namenode</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/hadoop-2.10.0/datanode</value>
        <final>true</final>
    </property>
</configuration>

Edit mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
        <description>The host and port that the MapReduce job tracker runs
        at.  If "local", then jobs are run in-process as a single map
        and reduce task.
        </description>
    </property>
</configuration>

Edit yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
        <description>Whether virtual memory limits will be enforced for containers</description>
    </property>
</configuration>

Format the NameNode

hadoop namenode -format
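
If the format succeeds, the NameNode directory configured in hdfs-site.xml now contains a current/ subdirectory holding a VERSION file and an initial fsimage; a quick way to confirm:

ls $HADOOP_HOME/namenode/current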

7. Install Spark

# Create the installation directory
mkdir /usr/local/spark/
# Extract the archive into the installation directory
tar -zxvf spark-2.4.5-bin-without-hadoop.tgz -C /usr/local/spark/
# Add the following to conf/spark-env.sh under the Spark installation directory
export SCALA_HOME=/usr/local/scala/scala-2.12.3
export JAVA_HOME=/usr/local/java/jdk1.8.0_231
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.10.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
SPARK_MASTER_IP=master
SPARK_LOCAL_DIR=/usr/local/spark/spark-2.4.5-bin-without-hadoop
SPARK_DRIVER_MEMORY=1G
# Add the worker hostnames to conf/slaves under the Spark installation directory
slave01
slave02
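
Neither conf/spark-env.sh nor conf/slaves exists in a fresh Spark distribution; both are created from the bundled templates and then edited as above. A minimal sketch, assuming the default layout of the spark-2.4.5-bin-without-hadoop package:

cd /usr/local/spark/spark-2.4.5-bin-without-hadoop/conf
# Create the config files from the bundled templates, then add the settings listed above
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves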

8. Save the image

docker commit -m "centos-7 with spark 2.4.5 and hadoop 2.10.0" 5d166889fd6c centos:with-spark-hadoop

II. Build the cluster from the saved image

1. Start the node containers

# Ports 50070, 8088, and 8080 serve the Hadoop, YARN, and Spark web UIs; map them to ports 50070, 8088, and 8080 on the host machine.
docker run -itd -P -p 50070:50070 -p 8088:8088 -p 8080:8080 --privileged --name master -h master --add-host slave01:172.17.0.7 --add-host slave02:172.17.0.8 centos:with-spark-hadoop /usr/sbin/init
docker run -itd -P --privileged --name slave01 -h slave01 --add-host master:172.17.0.6 --add-host slave02:172.17.0.8 centos:with-spark-hadoop /usr/sbin/init
docker run -itd -P --privileged --name slave02 -h slave02 --add-host master:172.17.0.6 --add-host slave01:172.17.0.7 centos:with-spark-hadoop /usr/sbin/init
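
The --add-host entries above assume Docker assigns 172.17.0.6, 172.17.0.7, and 172.17.0.8 to master, slave01, and slave02. If the containers receive different addresses on your machine, check the actual IPs and adjust the mappings accordingly:

# Print the bridge-network IP of each container
docker inspect -f '{{.NetworkSettings.IPAddress}}' master slave01 slave02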

# Enter the master container
docker exec -it master /bin/bash
# Verify the host entries (cat /etc/hosts should show the following)
172.17.0.7	slave01
172.17.0.8	slave02
172.17.0.6	master
# Test that passwordless SSH login works
ssh root@slave01
ssh root@slave02

2. Start the cluster

Enter the master container and start the Hadoop and Spark clusters:

# Start Hadoop
cd /usr/local/hadoop/hadoop-2.10.0/sbin/; sh start-all.sh
# Check the running processes
jps
#1569 Jps
#1139 ResourceManager
#774 DataNode
#1258 NodeManager
#971 SecondaryNameNode
#621 NameNode
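
Before starting Spark, a simple way to confirm that HDFS is actually usable is to write something to it (the path /tmp/smoke-test below is arbitrary):

hdfs dfs -mkdir -p /tmp/smoke-test
hdfs dfs -ls /tmp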

# Start Spark
cd /usr/local/spark/spark-2.4.5-bin-without-hadoop/sbin; sh start-all.sh
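
To verify the standalone cluster end to end, the SparkPi example bundled with the distribution can be submitted to the master. A sketch assuming the default master port 7077 and the Scala 2.11 examples jar shipped with Spark 2.4.5 (adjust the jar name if your build differs):

cd /usr/local/spark/spark-2.4.5-bin-without-hadoop
./bin/spark-submit --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.4.5.jar 100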

III. Check the results in the web UIs

Hadoop NameNode management UI: http://127.0.0.1:50070

YARN cluster/nodes UI: http://127.0.0.1:8088/cluster

Spark master UI: http://127.0.0.1:8080/

IV. Problems encountered

1. Starting SSH inside the CentOS 7 container in Docker throws the following error

System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down

Solution: see the SSH configuration section above (Part I, step 3).

2. After configuring SSH, systemctl start sshd.service fails or the service does not start

Solution:

# Start the container with --privileged and /usr/sbin/init, i.e. in privileged mode with init as PID 1; otherwise systemctl cannot start services.
# For example: docker run -tid --privileged --name my-ssh-centos my-ssh-centos /usr/sbin/init

3. Starting Spark throws the following exception

Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
        at java.lang.Class.getMethod0(Class.java:3018)
        at java.lang.Class.getMethod(Class.java:1784)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:715)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
(Error excerpt originally reported at https://blog.csdn.net/Gavin_chun/article/details/78554582)

Solution:

Add the following to spark-env.sh:

export SPARK_DIST_CLASSPATH=$(hadoop classpath)
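
Since the -without-hadoop build relies entirely on this variable to locate the Hadoop jars, it is worth checking what hadoop classpath actually resolves to; the entries should all point under /usr/local/hadoop/hadoop-2.10.0:

hadoop classpath | tr ':' '\n' | head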

References:

https://blog.csdn.net/GOGO_YAO/article/details/76863201

https://www.lagou.com/lgeduarticle/82724.html

https://spark.apache.org/docs/latest/hadoop-provided.html

 
