HDFS 3.x HA Distributed Installation and Deployment

1. Installation environment: CentOS 7, hadoop-3.1.2, zookeeper-3.4.14, 3 nodes (192.168.56.60, 192.168.56.62, 192.168.56.64).

                centos60        centos62        centos64
NameNode        yes             yes             -
ZooKeeper       yes             yes             yes
DataNode        yes             yes             yes
JournalNode     yes             yes             yes

2. HA: high availability. In this article, HA for HDFS means NameNode high availability (supported since Hadoop 2.x), since DataNodes have always supported HA. This post only covers the HA-specific steps; for a plain distributed install, first see my other post, "HDFS distributed installation (in detail)" — I assume you have already read it.

3. How HA works (brief overview):

 

The reason a ZooKeeper ensemble can keep the NameNode service highly available is this: the Hadoop cluster runs two NameNodes, and each of them periodically sends a heartbeat to ZooKeeper (through its ZKFailoverController) to signal that it is still alive and able to serve. At any given moment only one NameNode is in the Active state and the other is in Standby. As soon as ZooKeeper stops receiving heartbeats from the Active NameNode, the Standby NameNode is switched over and promoted to Active, so the cluster always has a usable NameNode, which achieves NameNode high availability.

To keep the Standby node's state synchronized with the Active node, both nodes communicate with a group of separate daemons called "JournalNodes" (JNs). When the Active node performs any namespace modification, it durably logs a record of the change to a majority of these JNs. The Standby node reads the edits from the JNs and constantly watches them for changes to the edit log; as it sees new edits, it applies them to its own namespace. In the event of a failover, the Standby makes sure it has read all edits from the JournalNodes before promoting itself to Active, which guarantees that the namespace state is fully synchronized before the failover happens.
To provide fast failover, the Standby node must also have up-to-date information about the location of blocks in the cluster. To achieve this, the DataNodes are configured with the locations of all NameNodes and send block location information and heartbeats to all of them.

There must be at least 3 JournalNode daemons, because edit log modifications have to be written to a majority of the JNs; this allows the system to tolerate the failure of a single machine. With N JournalNodes running, the system can tolerate at most (N-1)/2 failures and continue to operate normally.
In an HA cluster, the Standby NameNode also performs checkpoints of the namespace state, so there is no need to run a Secondary NameNode in an HA cluster.
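As a quick sanity check of that formula, a throwaway shell loop (not part of the deployment itself) prints the tolerated failures for a few ensemble sizes:

# floor((N-1)/2) JournalNodes may fail while a majority still remains
for n in 3 5 7; do
  echo "JournalNodes=$n  tolerated failures=$(( (n - 1) / 2 ))"
done
# JournalNodes=3  tolerated failures=1
# JournalNodes=5  tolerated failures=2
# JournalNodes=7  tolerated failures=3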

A new HA feature in Hadoop 3.x: more than one Standby NameNode is supported (2.x supports only one). So why am I not deploying a NameNode on centos64? Reason: 3 JournalNodes only tolerate the loss of a single JournalNode, so with these 3 servers only one machine may go down in any case; deploying 3 NameNodes would be pointless, because losing 2 machines is not survivable either way. With enough servers you can deploy 3 NameNodes (from the official docs: the minimum number of HA NameNodes is 2, but you may configure more; due to communication overhead, no more than 5 is suggested and 3 NameNodes are recommended). For example, with 5 servers you can deploy 5 JournalNodes and 3 NameNodes, which tolerates the loss of any 2 servers.

4. Stopping the running non-HA cluster (I assume you deployed it following the blog post referenced in section 2. The official docs also describe converting a non-HA cluster to HA in place, but I wanted to deploy everything myself first, so I did not use that path.)

4.1 Stop the cluster:

[root@centos60 hadoop]# jps
1682025 NameNode
1682520 DataNode
1262406 Jps
1683780 SecondaryNameNode
[root@centos60 hadoop]# stop-dfs.sh 
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Stopping namenodes on [centos60]
Last login: Mon Jul 29 11:15:34 CST 2019 from 192.168.56.133 on pts/9
Stopping datanodes
Last login: Mon Jul 29 16:14:02 CST 2019 on pts/1
Stopping secondary namenodes [centos60]
Last login: Mon Jul 29 16:14:04 CST 2019 on pts/1
[root@centos60 hadoop]# jps
1266880 Jps

4.2 Delete the persisted files of the previous cluster (the namenode, datanode and tmp directories):

[root@centos60 hadoop]# ls
datanode  namenode  tmp
[root@centos60 hadoop]# rm -rf *

[root@centos62 hadoop]# ls
datanode
[root@centos62 hadoop]# rm -rf *

[root@centos64 hadoop]# ls
datanode
[root@centos64 hadoop]# rm -rf *

5. Configuration changes for HA

5.1 Changes to core-site.xml:

[root@centos60 hadoop]# cd /usr/local/hadoop-3.1.2/
[root@centos60 hadoop-3.1.2]# ls
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share
[root@centos60 hadoop-3.1.2]# cd etc/hadoop/
[root@centos60 hadoop]# ls
capacity-scheduler.xml  hadoop-metrics2.properties        httpfs-signature.secret  log4j.properties            shellprofile.d                 yarn-env.sh
configuration.xsl       hadoop-policy.xml                 httpfs-site.xml          mapred-env.cmd              ssl-client.xml.example         yarnservice-log4j.properties
container-executor.cfg  hadoop-user-functions.sh.example  kms-acls.xml             mapred-env.sh               ssl-server.xml.example         yarn-site.xml
core-site.xml           hdfs-site.xml                     kms-env.sh               mapred-queues.xml.template  user_ec_policies.xml.template
hadoop-env.cmd          httpfs-env.sh                     kms-log4j.properties     mapred-site.xml             workers
hadoop-env.sh           httpfs-log4j.properties           kms-site.xml             root                        yarn-env.cmd
[root@centos60 hadoop]# vi core-site.xml
<configuration>
  <!-- nameservice ID; with HA, clients connect to the nameservice mycluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- Hadoop temporary directory; by default the NameNode and DataNode data lives under this path -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>
  <!-- ZooKeeper ensemble addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>centos60:2181,centos62:2181,centos64:2181</value>
  </property>
</configuration>
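Once the file is in place, a quick sanity check that the nameservice is being picked up (assuming the hadoop bin directory is on PATH):

# should print hdfs://mycluster
hdfs getconf -confKey fs.defaultFS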

5.2 Changes to hdfs-site.xml:

<configuration>
  <!-- HDFS nameservice; must match the value in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  
  <!-- mycluster has two NameNodes -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>centos60,centos62</value>
  </property>
  
  <!-- RPC address; RPC is used to talk to the DataNodes -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos60</name>
    <value>centos60:9000</value>
  </property>

  <!-- Since Hadoop 3 the default HTTP port is 9870; it is set back to 50070 here for compatibility with earlier versions -->
  <property>
    <name>dfs.namenode.http-address.mycluster.centos60</name>
    <value>centos60:50070</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.mycluster.centos62</name>
    <value>centos62:9000</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.mycluster.centos62</name>
    <value>centos62:50070</value>
  </property>
  
  <!-- URI of the JNs where the NameNodes read/write edits -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://centos60:8485;centos62:8485;centos64:8485/mycluster</value>
  </property>
  
  <!-- Local disk path where the JournalNode stores its data -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop/journalnode</value>
  </property>
  
  <!-- Enable automatic failover when the NameNode fails -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <!-- Java class used by HDFS clients to contact the Active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
    
  <!-- Used to fence the Active NameNode during a failover. If ssh listens on the default port 22, the value is simply sshfence -->
  <!-- If ssh uses another port, write sshfence(hadoop:22022), where 22022 is the new ssh port -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  
  <!-- The ssh fencing method requires passwordless ssh -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>

  <!-- Directory where the NameNode stores its name (metadata) data -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop/namenode</value>
  </property>

  <!-- Directory where the DataNode stores its data -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/datanode</value>
  </property>
  
  <!-- Number of replicas -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  
</configuration>
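Before moving on, you can confirm that the HA keys are read correctly (again assuming hdfs is on PATH):

# should list both NameNode hosts: centos60 centos62
hdfs getconf -namenodes
# should print centos60,centos62
hdfs getconf -confKey dfs.ha.namenodes.mycluster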

5.3 scp the modified configuration files to the other 2 nodes:

[root@centos60 hadoop]# scp core-site.xml root@centos62:/usr/local/hadoop-3.1.2/etc/hadoop/core-site.xml 
core-site.xml                                                                                                                               100% 1280     1.4MB/s   00:00    
[root@centos60 hadoop]# scp core-site.xml root@centos64:/usr/local/hadoop-3.1.2/etc/hadoop/core-site.xml 
core-site.xml                                                                                                                               100% 1280   289.1KB/s   00:00    
[root@centos60 hadoop]# scp hdfs-site.xml root@centos62:/usr/local/hadoop-3.1.2/etc/hadoop/hdfs-site.xml 
hdfs-site.xml                                                                                                                               100% 3393     3.6MB/s   00:00    
[root@centos60 hadoop]# scp hdfs-site.xml root@centos64:/usr/local/hadoop-3.1.2/etc/hadoop/hdfs-site.xml 
hdfs-site.xml                                                                                                                               100% 3393     1.3MB/s   00:00    
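The same copy can be done in one small loop if you prefer (a sketch, using the hostnames and paths above):

for host in centos62 centos64; do
  scp core-site.xml hdfs-site.xml root@${host}:/usr/local/hadoop-3.1.2/etc/hadoop/
done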

6. Downloading and installing ZooKeeper:

Download: http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz (the official 3.5 release package had problems).

Extract and install:

[root@centos60 tmp]# tar zxvf zookeeper-3.4.14.tar.gz -C /usr/local/
[root@centos60 tmp]# cd /usr/local/zookeeper-3.4.14/
[root@centos60 zookeeper-3.4.14]# ls
bin        dist-maven       lib          pom.xml               src                       zookeeper-3.4.14.jar.md5   zookeeper-contrib  zookeeper-jute
build.xml  ivysettings.xml  LICENSE.txt  README.md             zookeeper-3.4.14.jar      zookeeper-3.4.14.jar.sha1  zookeeper-docs     zookeeper-recipes
conf       ivy.xml          NOTICE.txt   README_packaging.txt  zookeeper-3.4.14.jar.asc  zookeeper-client           zookeeper-it       zookeeper-server
[root@centos60 zookeeper-3.4.14]# cd conf/
[root@centos60 conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[root@centos60 conf]# cp zoo_sample.cfg  zoo.cfg
[root@centos60 conf]# vi zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/zookeeper-3.4.14/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=centos60:2888:3888
server.2=centos62:2888:3888
server.3=centos64:2888:3888

The two changes relative to zoo_sample.cfg are: point dataDir at the new data directory

dataDir=/usr/local/zookeeper-3.4.14/data

and append at the end:

server.1=centos60:2888:3888
server.2=centos62:2888:3888
server.3=centos64:2888:3888

Create the myid file:

[root@centos60 conf]# cd ..
[root@centos60 zookeeper-3.4.14]# mkdir data
[root@centos60 zookeeper-3.4.14]# cd data/
[root@centos60 data]# touch myid
[root@centos60 data]# echo 1 >> myid 
[root@centos60 data]# cat myid 
1

scp to the other 2 nodes and change the value in myid:

[root@centos60 local]# scp -r zookeeper-3.4.14 root@centos62:/usr/local/zookeeper-3.4.14/
[root@centos60 local]# scp -r zookeeper-3.4.14 root@centos64:/usr/local/zookeeper-3.4.14/

[root@centos62 ~]# cd /usr/local/zookeeper-3.4.14/data
[root@centos62 data]# vi myid 
2

[root@centos64 ~]# cd /usr/local/zookeeper-3.4.14/data/
[root@centos64 data]# vi myid
3
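If you prefer to script this step, the myid files can also be written in one pass (a sketch, assuming passwordless ssh as root and the same install path on all three hosts):

id=1
for host in centos60 centos62 centos64; do
  ssh root@${host} "mkdir -p /usr/local/zookeeper-3.4.14/data && echo ${id} > /usr/local/zookeeper-3.4.14/data/myid"
  id=$((id + 1))
done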

Start ZooKeeper; run this on all 3 nodes:

[root@centos60 bin]# pwd
/usr/local/zookeeper-3.4.14/bin
[root@centos60 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@centos60 bin]# jps
172550 QuorumPeerMain
172838 Jps

Error:

[root@centos60 bin]# ./zkServer.sh  status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[root@centos60 bin]# cat zookeeper.out 
2019-07-29 20:23:42,049 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot open channel to 3 at election address centos64/192.168.56.64:3888
java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:958)

It turned out the firewall (firewalld/iptables) on centos60 had not been stopped. Stop the firewall:

[root@centos60 bin]# systemctl stop firewalld.service
[root@centos60 bin]# sudo service iptables stop
Redirecting to /bin/systemctl stop iptables.service
[root@centos60 bin]# ./zkServer.sh  status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower

Normally one node will be the leader and the other 2 will be followers.
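To avoid chasing this host by host, you can stop the firewall and check the ZooKeeper status on all three nodes in one loop (a sketch, assuming passwordless ssh as root):

for host in centos60 centos62 centos64; do
  ssh root@${host} "systemctl stop firewalld; systemctl disable firewalld; /usr/local/zookeeper-3.4.14/bin/zkServer.sh status"
done
# expected: one host reports "Mode: leader", the other two report "Mode: follower"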

7. Starting HDFS. The first start-up is fairly involved; after that a plain start-all.sh does the job.
7.1 Initialize the required state in ZooKeeper. Run the following command on one of the NameNode hosts (this creates a znode in ZooKeeper in which the automatic failover system stores its data):

[root@centos60 bin]# hdfs zkfc -formatZK
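You can confirm the znode was created with the ZooKeeper CLI (a quick check; formatZK creates /hadoop-ha/<nameservice> by default):

# should show the mycluster znode under /hadoop-ha
/usr/local/zookeeper-3.4.14/bin/zkCli.sh -server centos60:2181 ls /hadoop-ha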

7.2 On every JournalNode host, start the JournalNode with the following command:

[root@centos60 bin]# hdfs --daemon start journalnode
[root@centos60 bin]# jps
420150 Jps
367814 QuorumPeerMain
420125 JournalNode
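The same command has to be run on centos62 and centos64 as well; for example (a sketch, assuming the same install path on the other hosts):

for host in centos62 centos64; do
  ssh root@${host} "/usr/local/hadoop-3.1.2/bin/hdfs --daemon start journalnode"
done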

7.3 On the first NameNode host, format the namenode and journalnode directories:

[root@centos60 bin]# hdfs namenode -format

Important! Then copy the generated namenode directory to the second NameNode host (my deployment kept failing at first, and this turned out to be the cause):

[root@centos60 hadoop]# pwd
/hadoop
[root@centos60 hadoop]# ls
journalnode  namenode
[root@centos60 hadoop]# scp -r namenode/ root@centos62:/hadoop/
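For reference, the standard alternative to this manual copy is to run bootstrapStandby on centos62 after the NameNode on centos60 has been started; it pulls the current namespace from the running NameNode over HTTP. The scp above works just as well for a brand-new cluster:

# on centos62, only once the NameNode on centos60 is up
hdfs namenode -bootstrapStandby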

7.4 On the primary NameNode host, run start-dfs.sh (or start-all.sh, which also starts the NodeManagers and the ResourceManager):

[root@centos60 bin]# start-dfs.sh

Error:

Starting namenodes on [centos60 centos62]
Last login: Tue Jul 30 14:52:36 CST 2019 from 192.168.56.133 on pts/5
Starting datanodes
Last login: Tue Jul 30 15:40:01 CST 2019 on pts/4
Starting journal nodes [centos60 centos62 centos64]
ERROR: Attempting to operate on hdfs journalnode as root
ERROR: but there is no HDFS_JOURNALNODE_USER defined. Aborting operation.
Starting ZK Failover Controllers on NN hosts [centos60 centos62]
ERROR: Attempting to operate on hdfs zkfc as root
ERROR: but there is no HDFS_ZKFC_USER defined. Aborting operation.
Starting resourcemanager
Last login: Tue Jul 30 15:40:04 CST 2019 on pts/4
Starting nodemanagers
Last login: Tue Jul 30 15:40:11 CST 2019 on pts/4

Fix:

[root@centos60 sbin]# cd /usr/local/hadoop-3.1.2/sbin/
[root@centos60 sbin]# pwd
/usr/local/hadoop-3.1.2/sbin
[root@centos60 sbin]# vi start-dfs.sh 
[root@centos60 sbin]# vi stop-dfs.sh

Add the following settings at the top of both start-dfs.sh and stop-dfs.sh:

HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root
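Alternatively (and arguably cleaner, since it survives script upgrades), the same variables can be exported in etc/hadoop/hadoop-env.sh instead of editing the sbin scripts; note that running the daemons as root is only reasonable for a test environment:

# in /usr/local/hadoop-3.1.2/etc/hadoop/hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root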

Then scp the modified start-dfs.sh and stop-dfs.sh to the other 2 nodes, restart the cluster, and it starts successfully:

[root@centos60 sbin]# stop-dfs.sh
[root@centos60 hadoop]# start-dfs.sh
[root@centos60 hadoop]# jps
21062 JournalNode
202677 DataNode
205173 DFSZKFailoverController
206346 Jps
13196 QuorumPeerMain
201277 NameNode

[root@centos62 hadoop]# jps
5777 JournalNode
17763 DFSZKFailoverController
2007 QuorumPeerMain
17544 DataNode
17358 NameNode
17822 Jps

[root@centos64 hadoop]# jps
32128 Jps
12039 QuorumPeerMain
25385 DataNode
13596 JournalNode

7.5 If instead you are converting a running cluster from non-HA to HA mode, the steps are:

# On the standby NameNode host, run the following to format it and copy the metadata from the primary
$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby

# On the primary host, run the following to initialize the JournalNodes with the edit data
$HADOOP_HOME/bin/hdfs namenode -initializeSharedEdits

# Then start the NameNode on the standby host
$HADOOP_HOME/bin/hdfs --daemon start namenode

8. Verifying HA

8.1 Check the NameNode states; one should be active and the other standby:

[root@centos60 namenode]# hdfs haadmin -getAllServiceState
centos60:9000                                      active    
centos62:9000                                      standby

8.2 Simulate a failure:

[root@centos60 hadoop]# jps
262694 Jps
21062 JournalNode
202677 DataNode
205173 DFSZKFailoverController
13196 QuorumPeerMain
201277 NameNode
[root@centos60 hadoop]# kill -9  201277

8.3 Wait a moment and check the states again:

[root@centos60 hadoop]# hdfs haadmin -getAllServiceState
2019-07-30 21:06:29,298 INFO ipc.Client: Retrying connect to server: centos60/192.168.56.60:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
centos60:9000                                      Failed to connect: Call From centos60/192.168.56.60 to centos60:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
centos62:9000                                      active
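To complete the test, bring the killed NameNode back on centos60; it should rejoin as standby:

# restart the NameNode that was killed, then check the states again
hdfs --daemon start namenode
hdfs haadmin -getAllServiceState
# expected: centos60 standby, centos62 active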

Supplement: basic HDFS commands:

Create a directory
hdfs dfs -mkdir /input
Put a file into the directory
hdfs dfs -put /file.txt /input
List the files in the directory
hdfs dfs -ls /input

Fix for a standby NameNode that fails to start: https://blog.csdn.net/yzh_1346983557/article/details/97812820

References: https://blog.csdn.net/shshheyi/article/details/84893371

https://blog.csdn.net/zhanglong_4444/article/details/87699369

https://blog.csdn.net/hliq5399/article/details/78193113
