尚學堂 Big Data Study Notes (2): Building a Hadoop Cluster with CentOS 6.5 + JDK 8 + Hadoop 2.6.5


1. Install the CentOS 6.5 System

Tools: VMware + the CentOS-6.5-x86_64-bin-DVD1.iso image

1.1 Create the CentOS 6.5 Virtual Machine

VM creation itself is skipped here; see other articles for the steps. The VM is installed from the ISO image listed above.

1.2 Start the SSH Service

service sshd start

1.3 Change the Hostname

Set the hostname on node1, node2, node3, and node4 to the corresponding name.

vi /etc/sysconfig/network

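On CentOS 6 the hostname is set in this file. A minimal sketch for node1 (use the matching name on each node):

NETWORKING=yes
HOSTNAME=node1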

2. Configure the Hosts File

  Edit the hosts file (on node1, node2, node3, node4):
  vi /etc/hosts

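A sketch of the entries to add; the addresses below are examples pieced together from later sections of these notes, so substitute your own IPs:

192.168.219.154 node1
192.168.219.155 node2
192.168.219.156 node3
192.168.219.157 node4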

3. Install and Configure JDK 8

(All nodes: node1, node2, node3, node4)

3.1 Upload the JDK


3.2 Extract

Extract the JDK package (here jdk-8u211-linux-x64.tar.gz) with the following command:

tar -zxvf jdk-8u211-linux-x64.tar.gz

After extraction:


3.3 Move the Extracted JDK to /usr/java

[root@node1 ~]# mkdir /usr/java
[root@node1 ~]# mv jdk1.8.0_211/ /usr/java/


Result:
[root@node1 ~]# cd /usr/java/
[root@node1 java]# ll
total 4
drwxr-xr-x. 7 uucp 143 4096 Apr  1 20:51 jdk1.8.0_211
[root@node1 java]#
[root@node1 java]# cd jdk1.8.0_211/
[root@node1 jdk1.8.0_211]# pwd
/usr/java/jdk1.8.0_211

3.4 Configure the Java Environment on All Nodes

(node1, node2, node3, node4)

[root@node1 jdk1.8.0_211]# vi /etc/profile
[root@node1 jdk1.8.0_211]# 

# Append at the end of the file
export JAVA_HOME=/usr/java/jdk1.8.0_211
export PATH=$PATH:$JAVA_HOME/bin

# Reload the profile
. /etc/profile
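To confirm the environment took effect, a quick check (the version string should match the JDK installed above):

# should report java version "1.8.0_211"
java -version
echo $JAVA_HOME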

3.5 Copy the JDK from node1 to the Other Nodes (node2, node3, node4)

cd /usr/java/jdk1.8.0_211

Create /usr/java on node2, node3, and node4:
mkdir /usr/java

Copy the JDK:
[root@node1 jdk1.8.0_211]# scp -r /usr/java/jdk1.8.0_211/ node2:`pwd`
[root@node1 jdk1.8.0_211]# scp -r /usr/java/jdk1.8.0_211/ node3:`pwd`
[root@node1 jdk1.8.0_211]# scp -r /usr/java/jdk1.8.0_211/ node4:`pwd`

4. Disable the Firewall

# Disable the firewall permanently (at boot)
chkconfig iptables off
# Stop the firewall for the current session
service iptables stop
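A couple of optional checks on CentOS 6 to confirm the firewall is really off:

# should report that iptables is not running
service iptables status
# every runlevel should show "off"
chkconfig --list iptables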

5. Configure Passwordless SSH

# Generate a key pair on node1, node2, node3, and node4

[root@node1 jdk1.8.0_211]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
62:12:00:8f:dd:36:56:1e:e8:b8:24:07:c5:d8:cc:62 root@node1
The key's randomart image is:
+--[ RSA 2048]----+
|.Oo  .o          |
|+E=o.o .         |
|oooo* .          |
|. +o.o           |
| + .. o S        |
|  .  o .         |
|                 |
|                 |
|                 |
+-----------------+
[root@node1 jdk1.8.0_211]# 

# Authorize the key locally

[root@node1 jdk1.8.0_211]# cd ~
[root@node1 ~]# cd .ssh/
[root@node1 .ssh]# ll
total 12
-rw-------. 1 root root 1675 Sep 16 19:39 id_rsa
-rw-r--r--. 1 root root  392 Sep 16 19:39 id_rsa.pub
-rw-r--r--. 1 root root  397 Sep 16 12:49 known_hosts
[root@node1 .ssh]# cat id_rsa.pub >> authorized_keys
[root@node1 .ssh]# ll
total 16
-rw-r--r--. 1 root root  392 Sep 16 19:42 authorized_keys
-rw-------. 1 root root 1675 Sep 16 19:39 id_rsa
-rw-r--r--. 1 root root  392 Sep 16 19:39 id_rsa.pub
-rw-r--r--. 1 root root  397 Sep 16 12:49 known_hosts
[root@node1 .ssh]# cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAy6IWsRhhBySt64m29Ezk0qJXpa5knI/xvw2R6rXwcfxA3sXHQYDZ4bUFCEgofQe99Kw5iCN0aztvUm6v/wSbn/5eR6Fu/gjVcC4siYOhGrKkhNLLzIkrVfba1qYjEzGmpZdA9mRMsNxqpZ7/8D3y5qXuIhqgOooggUiB7EcVjIfIUUL2k8XDHPI8CJwyNskjm+vtjxqP3f73hZBFuS4ozPAQLEM9gXQHW6kAXJn8AB2ukxxnvs1spEdHgtsFURl0U45BjjIm5Di7eUhxLJ6+E06k62XGWQcfbvIpiEYeol0FGPaE0H/3KhUwvoDM+wU6gvRu1J0T5PkWgJasBPAy8w== root@node1
[root@node1 .ssh]# 


# Exchange public keys between nodes
[root@node1 .ssh]# scp ./id_rsa.pub root@node2:`pwd`/node1.pub
[root@node1 .ssh]# scp ./id_rsa.pub root@node3:`pwd`/node1.pub
[root@node1 .ssh]# scp ./id_rsa.pub root@node4:`pwd`/node1.pub
[root@node2 .ssh]# cat node1.pub >> authorized_keys
[root@node3 .ssh]# cat node1.pub >> authorized_keys
[root@node4 .ssh]# cat node1.pub >> authorized_keys

[root@node2 .ssh]# scp ./id_rsa.pub root@node1:`pwd`/node2.pub
[root@node2 .ssh]# scp ./id_rsa.pub root@node3:`pwd`/node2.pub
[root@node2 .ssh]# scp ./id_rsa.pub root@node4:`pwd`/node2.pub
[root@node1 .ssh]# cat node2.pub >> authorized_keys
[root@node3 .ssh]# cat node2.pub >> authorized_keys
[root@node4 .ssh]# cat node2.pub >> authorized_keys

[root@node3 .ssh]# scp ./id_rsa.pub root@node1:`pwd`/node3.pub
[root@node3 .ssh]# scp ./id_rsa.pub root@node2:`pwd`/node3.pub
[root@node3 .ssh]# scp ./id_rsa.pub root@node4:`pwd`/node3.pub
[root@node1 .ssh]# cat node3.pub >> authorized_keys
[root@node2 .ssh]# cat node3.pub >> authorized_keys
[root@node4 .ssh]# cat node3.pub >> authorized_keys

[root@node4 .ssh]# scp ./id_rsa.pub root@node1:`pwd`/node4.pub
[root@node4 .ssh]# scp ./id_rsa.pub root@node2:`pwd`/node4.pub
[root@node4 .ssh]# scp ./id_rsa.pub root@node3:`pwd`/node4.pub
[root@node1 .ssh]# cat node4.pub >> authorized_keys
[root@node2 .ssh]# cat node4.pub >> authorized_keys
[root@node3 .ssh]# cat node4.pub >> authorized_keys
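A quick verification from node1 (each command should print the remote hostname without asking for a password; repeat from the other nodes if desired):

ssh node2 hostname
ssh node3 hostname
ssh node4 hostname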

6. Set Up Time Synchronization Across the Cluster

This part is adapted from https://blog.csdn.net/know9163/article/details/81141203.

Cluster time synchronization: pick one machine in the cluster as the time source (node1 here; any machine will do, and an IP address can be used instead of the hostname), then have every other machine in the cluster sync with node1 every ten minutes.

Steps:

  1. Run rpm -qa | grep ntp to check whether ntp and ntpdate are installed:
[root@node1 share]# rpm -qa | grep ntp
fontpackages-filesystem-1.41-1.1.el6.noarch
ntpdate-4.2.4p8-3.el6.centos.x86_64
ntp-4.2.4p8-3.el6.centos.x86_64
  2. vi /etc/ntp.conf needs three changes

a. Uncomment the following line; 192.168.1.0 here is node1's network segment.

# Hosts on local network are less restricted.
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

Change it to your own network segment, 192.168.x.0.

b. Comment out server 0 through server 3:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org
#server 3.centos.pool.ntp.org

c. Uncomment the server and fudge lines:

# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10
  3. Edit /etc/sysconfig/ntpd (vi /etc/sysconfig/ntpd) and add SYNC_HWCLOCK=yes:
# Drop root to id 'ntp:ntp' by default.
SYNC_HWCLOCK=yes
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"
  4. Run chkconfig ntpd on to enable ntpd permanently (at boot).

  5. Run service ntpd start; afterwards the status can be checked with service ntpd status.

  6. Run crontab -e and add a cron job that syncs the time with node1 every ten minutes. This must be done on every other machine in the cluster.

## sync cluster time
## minute hour day month weekday: this entry syncs every ten minutes

0-59/10 * * * * /usr/sbin/ntpdate node1
  7. Run ntpdate node1 once to sync the time manually right away (a quick cross-node check is sketched below).
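A minimal sketch for comparing the clocks across the whole cluster, run from node1 and relying on the passwordless SSH configured in section 5:

for h in node1 node2 node3 node4; do echo -n "$h: "; ssh $h date; done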

7. Configure HDFS

7.0 Configure Standalone Hadoop on Windows

// ... omitted

7.1 Configure Pseudo-Distributed HDFS

Everything in this subsection is configured on a single node (node1).

1. Upload the Hadoop Archive

2. Extract the Archive

cd into the directory the archive was uploaded to, then extract it:

tar -zxvf hadoop-2.6.5.tar.gz

[root@node1 ~]# ll
total 369816
-rw-------. 1 root root      3320 Sep 16 07:26 anaconda-ks.cfg
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Desktop
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Documents
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Downloads
drwxr-xr-x. 9 root root      4096 May 24  2017 hadoop-2.6.5
-rw-r--r--. 1 root root 183594876 Sep 16 19:45 hadoop-2.6.5.tar.gz
-rw-r--r--. 1 root root     41364 Sep 16 07:25 install.log
-rw-r--r--. 1 root root      9154 Sep 16 07:23 install.log.syslog
-rw-r--r--. 1 root root 194990602 Sep 16 19:23 jdk-8u211-linux-x64.tar.gz
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Music
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Pictures
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Public
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Templates
drwxr-xr-x. 2 root root      4096 Sep 16 07:32 Videos
[root@node1 ~]# 
[root@node1 ~]# mv hadoop-2.6.5 /opt/sxt/
[root@node1 ~]# cd /opt/sxt/
[root@node1 sxt]# ll
total 4
drwxr-xr-x. 9 root root 4096 May 24  2017 hadoop-2.6.5
[root@node1 sxt]# 

3. Edit the Configuration Files

1. Edit /etc/profile and reload it afterwards

vi /etc/profile

# Add the Hadoop configuration
export JAVA_HOME=/usr/java/jdk1.8.0_211
export HADOOP_HOME=/opt/sxt/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Reload the profile
[root@node1 hadoop-2.6.5]# . /etc/profile
[root@node1 hadoop-2.6.5]# hd
hdfs             hdfs.cmd         hdfs-config.cmd  hdfs-config.sh   hdparm 
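Typing hd and pressing Tab, as above, shows that the Hadoop scripts are now on the PATH. Another quick check:

# should report Hadoop 2.6.5
hadoop version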

2. Edit hadoop-env.sh

First cd into the extracted Hadoop directory, then set JAVA_HOME to the JDK path configured earlier:

[root@node1 hadoop-2.6.5]# vi etc/hadoop/hadoop-env.sh 

# Change the JAVA_HOME setting inside the file:

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_211

3. Edit core-site.xml

vi etc/hadoop/core-site.xml

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/var/sxt/hadoop/local</value>
</property>

4. Edit hdfs-site.xml

[root@node1 hadoop-2.6.5]#  vi etc/hadoop/hdfs-site.xml

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node1:50090</value>
</property>

dfs.replication is the block replication factor; for pseudo-distributed mode set it to 1, since there is only one DataNode.

5. Edit slaves

[root@node1 hadoop-2.6.5]# vi etc/hadoop/slaves 

node1

6. Format the NameNode

hdfs namenode -format

If the output contains "has been successfully formatted", the format succeeded.

[root@node1 hadoop-2.6.5]# hdfs namenode -format
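As an extra check, the NameNode metadata directory should now exist; with the default dfs.namenode.name.dir layout it sits under hadoop.tmp.dir:

# should contain a VERSION file and fsimage files
ls /var/sxt/hadoop/local/dfs/name/current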

7. Start HDFS

start-dfs.sh

If startup succeeded, jps shows the NameNode, DataNode, and SecondaryNameNode processes, as in the output below.

[root@node1 current]# start-dfs.sh 
Starting namenodes on [node1]
The authenticity of host 'node1 (192.168.219.167)' can't be established.
RSA key fingerprint is 6c:5a:b4:9a:9e:e1:27:99:9c:34:66:5c:d5:93:d0:72.
Are you sure you want to continue connecting (yes/no)? yes
node1: Warning: Permanently added 'node1,192.168.219.167' (RSA) to the list of known hosts.
node1: starting namenode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-namenode-node1.out
node1: starting datanode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-datanode-node1.out
Starting secondary namenodes [node1]
node1: starting secondarynamenode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-node1.out
[root@node1 current]# jps
3490 Jps
3220 DataNode
3143 NameNode
3327 SecondaryNameNode
[root@node1 current]# 

8. Test in a Browser

On the host machine, open http://IP:50070 in a browser; the NameNode web UI should appear.
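Optionally, a quick command-line smoke test of the new filesystem (the paths are arbitrary examples):

hdfs dfs -mkdir -p /tmp/test
hdfs dfs -put /etc/hosts /tmp/test/
hdfs dfs -ls /tmp/test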

7.2 Configure Fully Distributed HDFS

1. Configure /etc/profile on node2, node3, node4
# Add the following configuration
export JAVA_HOME=/usr/java/jdk1.8.0_211
export HADOOP_HOME=/opt/sxt/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin


2. Edit core-site.xml on the master

[root@node1 hadoop-2.6.5]# vi etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/sxt/hadoop/full</value>
    </property>
</configuration>

3. Edit hdfs-site.xml on the master

[root@node1 hadoop-2.6.5]# vi etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:50090</value>
    </property>
</configuration>

4. Edit slaves on the master

[root@node1 hadoop-2.6.5]# vi etc/hadoop/slaves 

node2
node3
node4

5. Use scp to copy the whole Hadoop directory to the same path on all worker nodes (node2, node3, node4)

First cd into /opt on the master:
scp -r ./sxt/ root@node2:/opt/
scp -r ./sxt/ root@node3:/opt/
scp -r ./sxt/ root@node4:/opt/

6. Start

  1. hdfs namenode -format
  2. start-dfs.sh
  3. Run jps on each node to check that the expected daemons are running
  4. Test in a browser at http://<master-ip>:50070

The full start-up sequence on the master is sketched below.
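A minimal sketch of that sequence, run on node1 after the configuration has been distributed:

# run once only: initializes a fresh namespace on the NameNode
hdfs namenode -format
# starts the NameNode on node1, DataNodes on node2 to node4, SecondaryNameNode on node2
start-dfs.sh
# list the Java daemons on the current node
jps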

7.3 Set Up High Availability (HA)

1. Configure ZooKeeper

(on node2, node3, node4)

1. Upload and Extract ZooKeeper

Extract it to /opt/sxt/:
tar -zxvf zookeeper-3.4.6.tar.gz -C /opt/sxt/

2. Configure the ZooKeeper Environment (needed on all three nodes)

vi /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_211
export HADOOP_HOME=/opt/sxt/hadoop-2.6.5
export ZOOKEEPER_HOME=/opt/sxt/zookeeper-3.4.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin

source /etc/profile

3. Edit the ZooKeeper Configuration File

First cd into the conf directory and copy the sample configuration to zoo.cfg: cp zoo_sample.cfg zoo.cfg
Then edit zoo.cfg:

1. Change the data directory
# example sakes.
dataDir=/var/sxt/zk

2. Append at the end of the file
#autopurge.purgeInterval=1
server.1=192.168.219.155:2888:3888
server.2=192.168.219.156:2888:3888
server.3=192.168.219.157:2888:3888

4. Create the ZooKeeper Data Directory and myid File Defined Above

# 1. Create the data directory
mkdir -p /var/sxt/zk

# 2. Create the myid file and write the server id appended to zoo.cfg above; node2 is server.1, so write 1
echo 1 > /var/sxt/zk/myid

5. Copy the Configured ZooKeeper Directory from node2 to node3 and node4

First cd into /opt/sxt/:

scp -r ./zookeeper-3.4.6/ node3:`pwd`
scp -r ./zookeeper-3.4.6/ node4:`pwd`

6. Repeat Step 4 on node3 and node4

# 1. Create the data directory
mkdir -p /var/sxt/zk

# 2. Write the matching server id into myid: node3 is server.2, node4 is server.3
echo 2 > /var/sxt/zk/myid    # on node3
echo 3 > /var/sxt/zk/myid    # on node4

7. Start ZooKeeper

Start in order: node2, node3, node4.

# Start ZooKeeper
zkServer.sh start

# Check ZooKeeper status
zkServer.sh status
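On a healthy ensemble, zkServer.sh status reports Mode: leader on one node and Mode: follower on the other two. A sketch for checking all three from a single node, assuming the passwordless SSH set up in section 5:

for h in node2 node3 node4; do
    echo "== $h =="
    ssh $h "source /etc/profile; zkServer.sh status"
done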

2. Configure HDFS for HA

1. On the master, edit hdfs-site.xml; adjust the node names and the SSH private-key path below to match your own setup.

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>node1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>node2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>node2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/var/sxt/hadoop/ha/jn</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>

2. On the master, edit core-site.xml

hadoop.tmp.dir must point to an empty directory.

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/sxt/hadoop/ha</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node2:2181,node3:2181,node4:2181</value>
     </property>
</configuration>

3. Distribute the configuration files from the master to all other nodes (run from the etc/hadoop directory, so that pwd in the commands below resolves to the same path on each node)

scp hdfs-site.xml core-site.xml node2:`pwd`
scp hdfs-site.xml core-site.xml node3:`pwd`
scp hdfs-site.xml core-site.xml node4:`pwd`

4. Manually Start the JournalNodes (on the master, node2, node3)

hadoop-daemon.sh start journalnode

Use jps to check that a JournalNode process is running on each of them.

5. Format

1. Format the NameNode on the master:
hdfs namenode -format
2. Start the NameNode:
hadoop-daemon.sh start namenode
3. After it starts, use jps to check that it is running:
[root@node1 hadoop]# jps
4721 Jps
4521 JournalNode
4651 NameNode
4. Sync the standby NameNode (node2)

On node2, run the sync command:

hdfs namenode -bootstrapStandby

If you see output like the following, it succeeded:

······
19/01/31 15:33:32 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: mycluster
        Other Namenode ID: nn1
  Other NN's HTTP address: http://node1:50070
  Other NN's IPC  address: node1/192.168.219.154:8020
             Namespace ID: 1181164627
            Block pool ID: BP-2019459657-192.168.219.154-1548919768243
               Cluster ID: CID-ab317192-2eb3-42a4-81bf-b2f6e15454bc
           Layout version: -60
       isUpgradeFinalized: true
=====================================================
19/01/31 15:33:33 INFO common.Storage: Storage directory /var/sxt/hadoop/ha/dfs/name has been successfully formatted.

······

6. Use ZKFC

1. Start the ZooKeeper client on node4:
zkCli.sh
2. On any node, format ZKFC:
hdfs zkfc -formatZK

You should see:

······
19/01/31 15:39:43 INFO ha.ActiveStandbyElector: Session connected.
19/01/31 15:39:43 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
19/01/31 15:39:43 INFO zookeeper.ClientCnxn: EventThread shut down
19/01/31 15:39:43 INFO zookeeper.ZooKeeper: Session: 0x368a2b711ff0001 closed

3. On node4, running ls / in the ZooKeeper client now shows a hadoop-ha znode:
[zk: localhost:2181(CONNECTED) 8] ls /                   
[zookeeper, hadoop-ha]

7. Start HDFS on the Master

start-dfs.sh

You should see:

[root@node1 hadoop]# start-dfs.sh 
Starting namenodes on [node1 node2]
node1: starting namenode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-namenode-node1.out
node2: starting namenode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-namenode-node2.out
node2: starting datanode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-datanode-node2.out
node3: starting datanode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-datanode-node3.out
node4: starting datanode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-datanode-node4.out
Starting journal nodes [node1 node2 node3]
node2: starting journalnode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-journalnode-node2.out
node3: starting journalnode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-journalnode-node3.out
node1: starting journalnode, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-journalnode-node1.out
Starting ZK Failover Controllers on NN hosts [node1 node2]
node1: starting zkfc, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-zkfc-node1.out
node2: starting zkfc, logging to /opt/sxt/hadoop-2.6.5/logs/hadoop-root-zkfc-node2.out
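To confirm which NameNode is currently active (nn1 and nn2 as defined in hdfs-site.xml):

# one should report "active", the other "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2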

8. Set Up YARN

1. Edit mapred-site.xml on the master

First copy the template and rename it to mapred-site.xml:

 cp mapred-site.xml.template mapred-site.xml

Then edit mapred-site.xml:

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

2. Edit yarn-site.xml on the master

vi yarn-site.xml

Add the following configuration, adjusting it to your own environment:

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node3</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node4</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node2:2181,node3:2181,node4:2181</value>
    </property>
</configuration>

3. Distribute the configured files from the master to the other nodes

scp mapred-site.xml yarn-site.xml node2:`pwd`
scp mapred-site.xml yarn-site.xml node3:`pwd`
scp mapred-site.xml yarn-site.xml node4:`pwd`

4. Start YARN

start-yarn.sh

You should see:

starting yarn daemons
starting resourcemanager, logging to /opt/sxt/hadoop-2.6.5/logs/yarn-root-resourcemanager-node1.out
node4: starting nodemanager, logging to /opt/sxt/hadoop-2.6.5/logs/yarn-root-nodemanager-node4.out
node3: starting nodemanager, logging to /opt/sxt/hadoop-2.6.5/logs/yarn-root-nodemanager-node3.out
node2: starting nodemanager, logging to /opt/sxt/hadoop-2.6.5/logs/yarn-root-nodemanager-node2.out

start-yarn.sh only starts the NodeManager daemons; the ResourceManagers on node3 and node4 must be started manually on those nodes.
Run the following command on node3 and on node4:

yarn-daemon.sh start resourcemanager

Use jps to check that the ResourceManager and NodeManager have started:

[root@node3 ~]# jps
4434 NodeManager
3714 JournalNode
3635 DataNode
4793 Jps
4570 ResourceManager
3231 QuorumPeerMain
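To confirm the HA state of the two ResourceManagers (rm1 and rm2 as defined in yarn-site.xml):

# one should report "active", the other "standby"
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2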

5. Test

From the host machine, open http://node3:8088 or http://node4:8088 in a browser; the YARN ResourceManager web UI should appear.
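As a final end-to-end check, a sample MapReduce job can be submitted using the examples jar shipped with hadoop-2.6.5 (the path below assumes the standard share/hadoop/mapreduce location):

# estimates pi with 2 map tasks and 10 samples each; the job should appear in the ResourceManager web UI
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 2 10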
