Hadoop HA High-Availability Cluster Setup

Cluster plan:

Hostname    IP               Installed software         Running processes

wm01        192.168.1.201    jdk, hadoop                NameNode, DFSZKFailoverController (zkfc)

wm02        192.168.1.202    jdk, hadoop                NameNode, DFSZKFailoverController (zkfc)

wm03        192.168.1.203    jdk, hadoop                ResourceManager

wm04        192.168.1.204    jdk, hadoop                ResourceManager

wm05        192.168.1.205    jdk, hadoop, zookeeper     DataNode, NodeManager, JournalNode, QuorumPeerMain

wm06        192.168.1.206    jdk, hadoop, zookeeper     DataNode, NodeManager, JournalNode, QuorumPeerMain

wm07        192.168.1.207    jdk, hadoop, zookeeper     DataNode, NodeManager, JournalNode, QuorumPeerMain

 

HA Cluster Setup

1.  CentOS network setup

http://blog.csdn.net/w0823m/article/details/55102138

2.  Give a regular CentOS user sudo privileges

http://blog.csdn.net/w0823m/article/details/55102332

3.  Set a desktop CentOS install to boot into server (text) mode

http://blog.csdn.net/w0823m/article/details/55102475

4.  Change the CentOS hostname

http://blog.csdn.net/w0823m/article/details/55102813

5.  Map hostnames to IPs (add the same entries on all seven machines, matching the cluster plan above)

vim /etc/hosts

192.168.1.201           wm01
192.168.1.202           wm02
192.168.1.203           wm03
192.168.1.204           wm04
192.168.1.205           wm05
192.168.1.206           wm06
192.168.1.207           wm07

6.  Disable the firewall

           # check the firewall status

           service iptables status

           # stop the firewall

           service iptables stop

           # check whether the firewall starts on boot

           chkconfig iptables --list

           # disable firewall autostart

           chkconfig iptables off
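The iptables/chkconfig commands above are for CentOS 6. If your machines run CentOS 7 or later, the firewall is managed by firewalld under systemd instead; the equivalent steps would be:

```shell
# CentOS 7+: stop firewalld now and keep it from starting on boot
systemctl stop firewalld
systemctl disable firewalld

# confirm it is inactive
systemctl status firewalld
```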

7.  Upload the installation files to the servers

8.  Install the JDK

http://blog.csdn.net/w0823m/article/details/52599911

 

 

Notes:

         1. In hadoop 2.0 an HDFS cluster usually has two NameNodes, one active and one standby. The active NameNode serves client requests; the standby NameNode serves none, and only replicates the active NameNode's state so that it can take over quickly if the active fails.

         hadoop 2.0 officially offers two HDFS HA solutions: NFS and QJM. We use the simpler QJM here. In this scheme the active and standby NameNodes share edit-log metadata through a group of JournalNodes; a write is considered successful once it reaches a majority of the JournalNodes, so an odd number of JournalNodes is normally configured.

         A zookeeper cluster is also configured here for ZKFC (DFSZKFailoverController) failover: when the active NameNode goes down, the standby NameNode is automatically switched to the active state.

         2. hadoop-2.2.0 still had a single point of failure: there was only one ResourceManager. hadoop-2.4.1 fixed this by supporting two ResourceManagers, one active and one standby, with their states coordinated through zookeeper.
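Once the cluster is up and running, the active/standby roles described above can be inspected from the command line; nn1 and nn2 are the NameNode IDs configured later in this guide:

```shell
# which NameNode is currently active? (each prints "active" or "standby")
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```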

Installation steps:

         1. Install and configure the zookeeper cluster (on wm05)

                   1.1 Unpack

                            tar -zxvf zookeeper-3.4.5.tar.gz -C /wm/

                   1.2 Edit the configuration

                            cd /wm/zookeeper-3.4.5/conf/

                            cp zoo_sample.cfg zoo.cfg

                            vim zoo.cfg

                            change: dataDir=/wm/zookeeper-3.4.5/tmp

                            append at the end:

                            server.1=wm05:2888:3888

                            server.2=wm06:2888:3888

                            server.3=wm07:2888:3888

                            save and quit

                            then create the tmp directory

                            mkdir /wm/zookeeper-3.4.5/tmp

                            create an empty file

                            touch /wm/zookeeper-3.4.5/tmp/myid

                            and write this node's ID into it

                            echo 1 > /wm/zookeeper-3.4.5/tmp/myid

                   1.3 Copy the configured zookeeper to the other nodes (first create a /wm directory on wm06 and wm07: mkdir /wm)

                            scp -r /wm/zookeeper-3.4.5/ wm06:/wm/

                            scp -r /wm/zookeeper-3.4.5/ wm07:/wm/

                            note: change the content of /wm/zookeeper-3.4.5/tmp/myid on wm06 and wm07

                            wm06:

                                     echo 2 > /wm/zookeeper-3.4.5/tmp/myid

                            wm07:

                                     echo 3 > /wm/zookeeper-3.4.5/tmp/myid
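The per-node myid assignments above can also be scripted from one machine. This is a sketch, assuming the hostnames and the /wm/zookeeper-3.4.5 path used in this guide; the files are staged locally first, and the scp line is left commented out so you can inspect them before pushing:

```shell
# stage one myid file per zookeeper node, matching server.N in zoo.cfg
id=1
for host in wm05 wm06 wm07; do
  echo "$id" > "myid.$host"
  # scp "myid.$host" "$host:/wm/zookeeper-3.4.5/tmp/myid"   # push it out when ready
  id=$((id + 1))
done
```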

        

         2. Install and configure the hadoop cluster (work on wm01)

                   2.1 Unpack

                            tar -zxvf hadoop-2.4.1.tar.gz -C /wm/

                   2.2 Configure HDFS (all hadoop 2.0 configuration files live under $HADOOP_HOME/etc/hadoop)

                            # add hadoop to the environment variables

                            vim /etc/profile

                            export JAVA_HOME=/usr/java/jdk1.7.0_55

                            export HADOOP_HOME=/wm/hadoop-2.4.1

                            export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

                            cd /wm/hadoop-2.4.1/etc/hadoop

                            2.2.1 Edit hadoop-env.sh

                                     export JAVA_HOME=/usr/java/jdk1.7.0_55

                                    

                            2.2.2 Edit core-site.xml

                                     <configuration>

                                               <!-- set the hdfs nameservice to ns1 -->

                                               <property>

                                                        <name>fs.defaultFS</name>

                                                        <value>hdfs://ns1/</value>

                                               </property>

                                               <!-- hadoop temporary directory -->

                                               <property>

                                                        <name>hadoop.tmp.dir</name>

                                                        <value>/wm/hadoop-2.4.1/tmp</value>

                                               </property>

                                               <!-- zookeeper quorum -->

                                               <property>

                                                        <name>ha.zookeeper.quorum</name>

                                                        <value>wm05:2181,wm06:2181,wm07:2181</value>

                                               </property>

                                     </configuration>

                                    

                            2.2.3 Edit hdfs-site.xml

                                     <configuration>

                                               <!-- set the hdfs nameservice to ns1; must match core-site.xml -->

                                               <property>

                                                        <name>dfs.nameservices</name>

                                                        <value>ns1</value>

                                               </property>

                                               <!-- ns1 has two NameNodes: nn1 and nn2 -->

                                               <property>

                                                        <name>dfs.ha.namenodes.ns1</name>

                                                        <value>nn1,nn2</value>

                                               </property>

                                               <!-- RPC address of nn1 -->

                                               <property>

                                                        <name>dfs.namenode.rpc-address.ns1.nn1</name>

                                                        <value>wm01:9000</value>

                                               </property>

                                               <!-- http address of nn1 -->

                                               <property>

                                                        <name>dfs.namenode.http-address.ns1.nn1</name>

                                                        <value>wm01:50070</value>

                                               </property>

                                               <!-- RPC address of nn2 -->

                                               <property>

                                                        <name>dfs.namenode.rpc-address.ns1.nn2</name>

                                                        <value>wm02:9000</value>

                                               </property>

                                               <!-- http address of nn2 -->

                                               <property>

                                                        <name>dfs.namenode.http-address.ns1.nn2</name>

                                                        <value>wm02:50070</value>

                                               </property>

                                               <!-- where the NameNode metadata is stored on the JournalNodes -->

                                               <property>

                                                        <name>dfs.namenode.shared.edits.dir</name>

                                                        <value>qjournal://wm05:8485;wm06:8485;wm07:8485/ns1</value>

                                               </property>

                                               <!-- where each JournalNode stores its data on local disk -->

                                               <property>

                                                        <name>dfs.journalnode.edits.dir</name>

                                                        <value>/wm/hadoop-2.4.1/journaldata</value>

                                               </property>

                                               <!-- enable automatic NameNode failover -->

                                               <property>

                                                        <name>dfs.ha.automatic-failover.enabled</name>

                                                        <value>true</value>

                                               </property>

                                               <!-- failover proxy provider used by clients -->

                                               <property>

                                                        <name>dfs.client.failover.proxy.provider.ns1</name>

                                                        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

                                               </property>

                                               <!-- fencing methods; list one method per line -->

                                               <property>

                                                        <name>dfs.ha.fencing.methods</name>

                                                        <value>

                                                                 sshfence

                                                                 shell(/bin/true)

                                                        </value>

                                               </property>

                                               <!-- sshfence requires passwordless ssh -->

                                               <property>

                                                        <name>dfs.ha.fencing.ssh.private-key-files</name>

                                                        <value>/home/hadoop/.ssh/id_rsa</value>

                                               </property>

                                               <!-- sshfence connection timeout -->

                                               <property>

                                                        <name>dfs.ha.fencing.ssh.connect-timeout</name>

                                                        <value>30000</value>

                                               </property>

                                     </configuration>
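After editing, you can sanity-check that hadoop actually picks up these values. Run this anywhere $HADOOP_HOME is on the PATH; the output should match the XML above:

```shell
hdfs getconf -confKey dfs.nameservices          # should print: ns1
hdfs getconf -confKey dfs.ha.namenodes.ns1      # should print: nn1,nn2
hdfs getconf -confKey dfs.namenode.shared.edits.dir
```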

                           

                            2.2.4 Edit mapred-site.xml

                                     <configuration>

                                               <!-- run mapreduce on yarn -->

                                               <property>

                                                        <name>mapreduce.framework.name</name>

                                                        <value>yarn</value>

                                               </property>

                                     </configuration>

                           

                            2.2.5 Edit yarn-site.xml

                                     <configuration>

                                               <!-- enable ResourceManager HA -->

                                               <property>

                                                        <name>yarn.resourcemanager.ha.enabled</name>

                                                        <value>true</value>

                                               </property>

                                               <!-- RM cluster id -->

                                               <property>

                                                        <name>yarn.resourcemanager.cluster-id</name>

                                                        <value>yrc</value>

                                               </property>

                                               <!-- logical names of the RMs -->

                                               <property>

                                                        <name>yarn.resourcemanager.ha.rm-ids</name>

                                                        <value>rm1,rm2</value>

                                               </property>

                                               <!-- address of each RM -->

                                               <property>

                                                        <name>yarn.resourcemanager.hostname.rm1</name>

                                                        <value>wm03</value>

                                               </property>

                                               <property>

                                                        <name>yarn.resourcemanager.hostname.rm2</name>

                                                        <value>wm04</value>

                                               </property>

                                               <!-- zk cluster address -->

                                               <property>

                                                        <name>yarn.resourcemanager.zk-address</name>

                                                        <value>wm05:2181,wm06:2181,wm07:2181</value>

                                               </property>

                                               <property>

                                                        <name>yarn.nodemanager.aux-services</name>

                                                        <value>mapreduce_shuffle</value>

                                               </property>

                                     </configuration>

                           

                                    

                            2.2.6 Edit slaves (slaves lists the worker nodes. Since HDFS is started from wm01 and yarn from wm03, the slaves file on wm01 determines where the datanodes run, and the slaves file on wm03 determines where the nodemanagers run)

                                     wm05

                                     wm06

                                     wm07

 

                            2.2.7 Configure passwordless ssh login

                                     # first configure passwordless login from wm01 to wm02, wm03, wm04, wm05, wm06 and wm07

                                     # generate a key pair on wm01

                                     ssh-keygen -t rsa

                                     # copy the public key to every node, including wm01 itself

                                     ssh-copy-id wm01

                                     ssh-copy-id wm02

                                     ssh-copy-id wm03

                                     ssh-copy-id wm04

                                     ssh-copy-id wm05

                                     ssh-copy-id wm06

                                     ssh-copy-id wm07

                                     # configure passwordless login from wm03 to wm04, wm05, wm06 and wm07

                                     # generate a key pair on wm03

                                     ssh-keygen -t rsa

                                     # copy the public key to the other nodes

                                     ssh-copy-id wm04

                                     ssh-copy-id wm05

                                     ssh-copy-id wm06

                                     ssh-copy-id wm07

                                     # note: the two namenodes must be able to ssh into each other without a password, so don't forget wm02 -> wm01

                                     # generate a key pair on wm02

                                     ssh-keygen -t rsa

                                     ssh-copy-id wm01
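A quick way to confirm the setup from wm01 is a batch login test; this is a sketch assuming the hostnames above. BatchMode makes ssh fail immediately instead of prompting for a password:

```shell
for host in wm01 wm02 wm03 wm04 wm05 wm06 wm07; do
  ssh -o BatchMode=yes "$host" hostname \
    || echo "passwordless login to $host is NOT working"
done
```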

                  

                   2.4 Copy the configured hadoop to the other nodes

                            scp -r /wm/ wm02:/

                            scp -r /wm/ wm03:/

                            scp -r /wm/hadoop-2.4.1/ hadoop@wm04:/wm/

                            scp -r /wm/hadoop-2.4.1/ hadoop@wm05:/wm/

                            scp -r /wm/hadoop-2.4.1/ hadoop@wm06:/wm/

                            scp -r /wm/hadoop-2.4.1/ hadoop@wm07:/wm/

                   ### note: follow the steps below in strict order

                   2.5 Start the zookeeper cluster (start zk on wm05, wm06 and wm07 separately)

                            cd /wm/zookeeper-3.4.5/bin/

                            ./zkServer.sh start

                            # check the status: one leader, two followers

                            ./zkServer.sh status
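Rather than logging in to each node to run the status check, you can query all three from one machine over ssh; a sketch assuming the hostnames and path used above:

```shell
for host in wm05 wm06 wm07; do
  echo "== $host =="
  ssh "$host" "/wm/zookeeper-3.4.5/bin/zkServer.sh status"
done
```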

                           

                   2.6 Start the journalnodes (run on wm05, wm06 and wm07 separately)

                            cd /wm/hadoop-2.4.1

                            sbin/hadoop-daemon.sh start journalnode

                            # verify with jps: a JournalNode process should now appear on wm05, wm06 and wm07

                  

                   2.7 Format HDFS

                            # run on wm01:

                            hdfs namenode -format

                            # formatting generates files under the hadoop.tmp.dir configured in core-site.xml, here /wm/hadoop-2.4.1/tmp; copy /wm/hadoop-2.4.1/tmp to /wm/hadoop-2.4.1/ on wm02.

                            scp -r tmp/ wm02:/wm/hadoop-2.4.1/

                            ## alternatively (recommended), run on wm02 with the namenode on wm01 running: hdfs namenode -bootstrapStandby

                  

                   2.8 Format ZKFC (run on wm01 only)

                            hdfs zkfc -formatZK

                   2.9 Start HDFS (run on wm01)

                            sbin/start-dfs.sh

                   2.10 Start YARN (##### note #####: run start-yarn.sh on wm03. The namenode and resourcemanager are placed on separate machines for performance reasons: both are resource-hungry, so they are kept apart, and each therefore has to be started on its own machine)

                            sbin/start-yarn.sh
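In hadoop 2.4.1, start-yarn.sh only starts the ResourceManager on the machine where it is run, so the standby ResourceManager on wm04 has to be started by hand; afterwards you can check which one became active:

```shell
# on wm04: start the second ResourceManager
sbin/yarn-daemon.sh start resourcemanager

# check the HA state of both ResourceManagers (each prints "active" or "standby")
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```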

 

                  

         At this point hadoop-2.4.1 is fully configured, and you can visit it in a browser:

                   http://192.168.1.201:50070

                   NameNode 'wm01:9000' (active)

                   http://192.168.1.202:50070

                   NameNode 'wm02:9000' (standby)

        

         Verify HDFS HA

                   first upload a file to hdfs

                   hadoop fs -put /etc/profile /profile

                   hadoop fs -ls /

                   then kill the active NameNode

                   kill -9 <pid of NN>

                   visit in a browser: http://192.168.1.202:50070

                   NameNode 'wm02:9000' (active)

                   the NameNode on wm02 is now active

                   run the command again:

                   hadoop fs -ls /

                   -rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile

                   the file uploaded earlier is still there!

                   manually restart the NameNode that was killed

                   sbin/hadoop-daemon.sh start namenode

                   visit in a browser: http://192.168.1.201:50070

                   NameNode 'wm01:9000' (standby)

        

         Verify YARN:

                   run the WordCount program from the hadoop example jars:

                   hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount /profile /out

         OK, all done!
