hadoop-2.4.1 Cluster Setup

hadoop 2.0 has had stable releases for a while and adds many features, such as HDFS HA and YARN. The latest hadoop-2.4.1 also adds YARN HA.

Note: the hadoop-2.4.1 package provided by Apache was compiled on a 32-bit operating system. Because Hadoop depends on some native C++ libraries,
installing hadoop-2.4.1 on a 64-bit OS requires recompiling it on a 64-bit OS.
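
A quick way to check whether the bundled native libraries match your operating system (a sketch; the path assumes the /hadoop install location used later in this guide, and the library file name may differ slightly between builds):
    file /hadoop/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0
    #the stock Apache build reports "ELF 32-bit"; after recompiling on a 64-bit OS it should report "ELF 64-bit"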

Cluster setup steps (a minimal sketch of these prep commands follows the list):
1.Change the Linux hostname
2.Change the IP address
3.Map the hostname to the IP address
    ######Note###### if your company rents servers or uses cloud hosts (such as Huawei Cloud or Aliyun hosts),
    /etc/hosts must map the internal (private) IP addresses to the hostnames
4.Turn off the firewall
5.Set up passwordless ssh login
6.Install the JDK, configure environment variables, etc.
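
A minimal sketch of the prep steps above, assuming a CentOS 6-style system (adjust the commands for your distribution):
    #1/2/3. hostname and hosts mapping (on each node, with its own name and address)
    vim /etc/sysconfig/network        #set HOSTNAME=hadoop01
    vim /etc/hosts                    #add lines such as: 192.168.1.201 hadoop01
    #4. firewall
    service iptables stop
    chkconfig iptables off
    #5/6. ssh and the JDK environment variables are covered in steps 2.2.7 and 2.2 below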

Cluster plan:
    Hostname    IP               Installed software        Running processes
    hadoop01    192.168.1.201    jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
    hadoop02    192.168.1.202    jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
    hadoop03    192.168.1.203    jdk, hadoop               ResourceManager
    hadoop04    192.168.1.204    jdk, hadoop               ResourceManager
    hadoop05    192.168.1.205    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
    hadoop06    192.168.1.206    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
    hadoop07    192.168.1.207    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
    
Notes:
    1.In hadoop 2.0 there are usually two NameNodes, one in active state and one in standby state. The active NameNode serves client requests, while the standby NameNode does not; it only synchronizes the active NameNode's state so that it can take over quickly if the active one fails.
    hadoop 2.0 officially offers two HDFS HA solutions, NFS and QJM. Here we use the simpler QJM. In this scheme the active and standby NameNodes synchronize metadata through a group of JournalNodes; an edit is considered written once it has been successfully written to a majority of the JournalNodes. An odd number of JournalNodes is usually configured (for example, with 3 JournalNodes the write quorum is 2, so the loss of one JournalNode can be tolerated).
    A zookeeper cluster is also configured here for ZKFC (DFSZKFailoverController) failover: when the active NameNode goes down, the standby NameNode is automatically switched to active.
    2.hadoop-2.2.0 still had the problem that there was only one ResourceManager, a single point of failure. hadoop-2.4.1 fixes this: there are two ResourceManagers, one active and one standby, and their state is coordinated through zookeeper.
Installation steps:
    1.Install and configure the zookeeper cluster (on hadoop05)
        1.1 Unpack
            tar -zxvf zookeeper-3.4.5.tar.gz -C /hadoop/
        1.2 Edit the configuration
            cd /hadoop/zookeeper-3.4.5/conf/
            cp zoo_sample.cfg zoo.cfg
            vim zoo.cfg
            change: dataDir=/hadoop/zookeeper-3.4.5/tmp
            append at the end:
            server.1=hadoop05:2888:3888
            server.2=hadoop06:2888:3888
            server.3=hadoop07:2888:3888
            save and exit
            then create the tmp directory
            mkdir /hadoop/zookeeper-3.4.5/tmp
            then create an empty file
            touch /hadoop/zookeeper-3.4.5/tmp/myid
            finally write this node's ID into the file
            echo 1 > /hadoop/zookeeper-3.4.5/tmp/myid
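            For reference, the resulting zoo.cfg should look roughly like this (a sketch: everything except dataDir and the server.* lines keeps the zoo_sample.cfg defaults):
            tickTime=2000
            initLimit=10
            syncLimit=5
            dataDir=/hadoop/zookeeper-3.4.5/tmp
            clientPort=2181
            server.1=hadoop05:2888:3888
            server.2=hadoop06:2888:3888
            server.3=hadoop07:2888:3888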
        1.3 Copy the configured zookeeper to the other nodes (first create a /hadoop directory on hadoop06 and hadoop07: mkdir /hadoop)
            scp -r /hadoop/zookeeper-3.4.5/ hadoop06:/hadoop/
            scp -r /hadoop/zookeeper-3.4.5/ hadoop07:/hadoop/
            
            note: change the content of /hadoop/zookeeper-3.4.5/tmp/myid on hadoop06 and hadoop07 accordingly
            hadoop06:
                echo 2 > /hadoop/zookeeper-3.4.5/tmp/myid
            hadoop07:
                echo 3 > /hadoop/zookeeper-3.4.5/tmp/myid
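            A quick sanity check that each node ended up with the right ID (a sketch; run it from any node that can ssh to the three zk nodes):
            for h in hadoop05 hadoop06 hadoop07; do echo -n "$h: "; ssh $h cat /hadoop/zookeeper-3.4.5/tmp/myid; done
            #expected output: hadoop05: 1, hadoop06: 2, hadoop07: 3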
    
    2.Install and configure the hadoop cluster (do this on hadoop01)
        2.1 Unpack
            tar -zxvf hadoop-2.4.1.tar.gz -C /hadoop/
        2.2 Configure HDFS (in hadoop 2.0 all configuration files are under the $HADOOP_HOME/etc/hadoop directory)
            #add hadoop to the environment variables
            vim /etc/profile
            export JAVA_HOME=/usr/java/jdk1.7.0_55
            export HADOOP_HOME=/hadoop/hadoop-2.4.1
            export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
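            #reload the profile so the new variables take effect in the current shell
            source /etc/profile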
            
            #all hadoop 2.0 configuration files are under $HADOOP_HOME/etc/hadoop
            cd /hadoop/hadoop-2.4.1/etc/hadoop
            
            2.2.1 Edit hadoop-env.sh
                export JAVA_HOME=/usr/java/jdk1.7.0_55
                
            2.2.2 Edit core-site.xml
                <configuration>
                    <!-- set the hdfs nameservice to ns1 -->
                    <property>
                        <name>fs.defaultFS</name>
                        <value>hdfs://ns1</value>
                    </property>
                    <!-- hadoop temporary directory -->
                    <property>
                        <name>hadoop.tmp.dir</name>
                        <value>/hadoop/hadoop-2.4.1/tmp</value>
                    </property>
                    <!-- zookeeper quorum addresses -->
                    <property>
                        <name>ha.zookeeper.quorum</name>
                        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
                    </property>
                </configuration>
                
            2.2.3 Edit hdfs-site.xml
                <configuration>
                    <!-- set the hdfs nameservice to ns1; must match the value in core-site.xml -->
                    <property>
                        <name>dfs.nameservices</name>
                        <value>ns1</value>
                    </property>
                    <!-- ns1 has two NameNodes: nn1 and nn2 -->
                    <property>
                        <name>dfs.ha.namenodes.ns1</name>
                        <value>nn1,nn2</value>
                    </property>
                    <!-- RPC address of nn1 -->
                    <property>
                        <name>dfs.namenode.rpc-address.ns1.nn1</name>
                        <value>hadoop01:9000</value>
                    </property>
                    <!-- HTTP address of nn1 -->
                    <property>
                        <name>dfs.namenode.http-address.ns1.nn1</name>
                        <value>hadoop01:50070</value>
                    </property>
                    <!-- RPC address of nn2 -->
                    <property>
                        <name>dfs.namenode.rpc-address.ns1.nn2</name>
                        <value>hadoop02:9000</value>
                    </property>
                    <!-- HTTP address of nn2 -->
                    <property>
                        <name>dfs.namenode.http-address.ns1.nn2</name>
                        <value>hadoop02:50070</value>
                    </property>
                    <!-- where the NameNode metadata (shared edit log) is stored on the JournalNodes -->
                    <property>
                        <name>dfs.namenode.shared.edits.dir</name>
                        <value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/ns1</value>
                    </property>
                    <!-- where the JournalNodes store their data on local disk -->
                    <property>
                        <name>dfs.journalnode.edits.dir</name>
                        <value>/hadoop/hadoop-2.4.1/journal</value>
                    </property>
                    <!-- enable automatic failover when the NameNode fails -->
                    <property>
                        <name>dfs.ha.automatic-failover.enabled</name>
                        <value>true</value>
                    </property>
                    <!-- failover proxy provider used by clients to find the active NameNode -->
                    <property>
                        <name>dfs.client.failover.proxy.provider.ns1</name>
                        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!-- fencing methods; separate multiple methods with newlines, one method per line -->
                    <property>
                        <name>dfs.ha.fencing.methods</name>
                        <value>
                            sshfence
                            shell(/bin/true)
                        </value>
                    </property>
                    <!-- the sshfence method requires passwordless ssh login -->
                    <property>
                        <name>dfs.ha.fencing.ssh.private-key-files</name>
                        <value>/home/hadoop/.ssh/id_rsa</value>
                    </property>
                    <!-- connect timeout for the sshfence method -->
                    <property>
                        <name>dfs.ha.fencing.ssh.connect-timeout</name>
                        <value>30000</value>
                    </property>
                </configuration>
            
            2.2.4 Edit mapred-site.xml
                <configuration>
                    <!-- run MapReduce on the yarn framework -->
                    <property>
                        <name>mapreduce.framework.name</name>
                        <value>yarn</value>
                    </property>
                </configuration>    
            
            2.2.5 Edit yarn-site.xml
                <configuration>
                        <!-- enable ResourceManager high availability -->
                        <property>
                           <name>yarn.resourcemanager.ha.enabled</name>
                           <value>true</value>
                        </property>
                        <!-- cluster id of the RM -->
                        <property>
                           <name>yarn.resourcemanager.cluster-id</name>
                           <value>yrc</value>
                        </property>
                        <!-- logical names of the RMs -->
                        <property>
                           <name>yarn.resourcemanager.ha.rm-ids</name>
                           <value>rm1,rm2</value>
                        </property>
                        <!-- hostnames of the two RMs -->
                        <property>
                           <name>yarn.resourcemanager.hostname.rm1</name>
                           <value>hadoop03</value>
                        </property>
                        <property>
                           <name>yarn.resourcemanager.hostname.rm2</name>
                           <value>hadoop04</value>
                        </property>
                        <!-- zookeeper cluster addresses -->
                        <property>
                           <name>yarn.resourcemanager.zk-address</name>
                           <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
                        </property>
                        <property>
                           <name>yarn.nodemanager.aux-services</name>
                           <value>mapreduce_shuffle</value>
                        </property>
                </configuration>
            
                
            2.2.6 Edit slaves (slaves specifies the worker nodes; since HDFS is started on hadoop01 and yarn on hadoop03, the slaves file on hadoop01 specifies where the datanodes run, and the slaves file on hadoop03 specifies where the nodemanagers run)
                hadoop05
                hadoop06
                hadoop07

            2.2.7 Set up passwordless ssh login
                #first configure passwordless login from hadoop01 to hadoop02, hadoop03, hadoop04, hadoop05, hadoop06 and hadoop07
                #generate a key pair on hadoop01
                ssh-keygen -t rsa
                #copy the public key to the other nodes, including hadoop01 itself
                ssh-copy-id hadoop01
                ssh-copy-id hadoop02
                ssh-copy-id hadoop03
                ssh-copy-id hadoop04
                ssh-copy-id hadoop05
                ssh-copy-id hadoop06
                ssh-copy-id hadoop07
                #configure passwordless login from hadoop03 to hadoop04, hadoop05, hadoop06 and hadoop07
                #generate a key pair on hadoop03
                ssh-keygen -t rsa
                #copy the public key to the other nodes
                ssh-copy-id hadoop04
                ssh-copy-id hadoop05
                ssh-copy-id hadoop06
                ssh-copy-id hadoop07
                #note: the two namenodes need passwordless ssh between them, so don't forget to configure passwordless login from hadoop02 to hadoop01
                #generate a key pair on hadoop02
                ssh-keygen -t rsa
                ssh-copy-id hadoop01
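                #quick check (from hadoop01, a sketch) that passwordless login works to every node
                for h in hadoop01 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06 hadoop07; do ssh $h hostname; done
                #should print each hostname without prompting for a password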
        
        2.4 Copy the configured hadoop to the other nodes
            scp -r /hadoop/ hadoop02:/
            scp -r /hadoop/ hadoop03:/
            scp -r /hadoop/hadoop-2.4.1/ root@hadoop04:/hadoop/
            scp -r /hadoop/hadoop-2.4.1/ root@hadoop05:/hadoop/
            scp -r /hadoop/hadoop-2.4.1/ root@hadoop06:/hadoop/
            scp -r /hadoop/hadoop-2.4.1/ root@hadoop07:/hadoop/
        ###Note: follow the steps below strictly in order
        2.5 Start the zookeeper cluster (start zk on hadoop05, hadoop06 and hadoop07 respectively)
            cd /hadoop/zookeeper-3.4.5/bin/
            ./zkServer.sh start
            #check the status: one leader and two followers
            ./zkServer.sh status
            
        2.6 Start the journalnodes (run this on hadoop05, hadoop06 and hadoop07 respectively)
            cd /hadoop/hadoop-2.4.1
            sbin/hadoop-daemon.sh start journalnode
            #verify with the jps command: hadoop05, hadoop06 and hadoop07 should each show an extra JournalNode process
        
        2.7 Format HDFS
            #run on hadoop01:
            hdfs namenode -format
            #formatting creates files under the directory set as hadoop.tmp.dir in core-site.xml, here /hadoop/hadoop-2.4.1/tmp; then copy /hadoop/hadoop-2.4.1/tmp to /hadoop/hadoop-2.4.1/ on hadoop02.
            scp -r tmp/ hadoop02:/hadoop/hadoop-2.4.1/
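            #As an alternative to copying tmp/ by hand, Hadoop 2.x HA can also bootstrap the second NameNode from the first (a sketch; the freshly formatted NameNode on hadoop01 must be running first):
            #on hadoop01: start the just-formatted NameNode
            sbin/hadoop-daemon.sh start namenode
            #on hadoop02: pull the metadata from the running NameNode
            hdfs namenode -bootstrapStandby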
        
        2.8 Format ZK (run on hadoop01 only)
            hdfs zkfc -formatZK
        
        2.9 Start HDFS (run on hadoop01)
            sbin/start-dfs.sh

        2.10 Start YARN (#####Note#####: run start-yarn.sh on hadoop03. The namenode and resourcemanager are placed on different machines for performance reasons, because both consume a lot of resources; since they are separated, they have to be started on their own machines)
            sbin/start-yarn.sh
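            #Note: depending on the version, start-yarn.sh may only start the ResourceManager on the local node; if the standby RM on hadoop04 does not show up, it can be started there by hand:
            sbin/yarn-daemon.sh start resourcemanager
            #on each node, jps is a quick way to compare the running processes against the cluster plan above
            jps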

        
    At this point hadoop-2.4.1 is fully configured, and you can check it in a browser:
        http://192.168.1.201:50070
        NameNode 'hadoop01:9000' (active)
        http://192.168.1.202:50070
        NameNode 'hadoop02:9000' (standby)
    
    Verify HDFS HA
        first upload a file to hdfs
        hadoop fs -put /etc/profile /profile
        hadoop fs -ls /
        then kill the active NameNode
        kill -9 <pid of NN>
        open in a browser: http://192.168.1.202:50070
        NameNode 'hadoop02:9000' (active)
        the NameNode on hadoop02 has now become active
        run the command again:
        hadoop fs -ls /
        -rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
        the file uploaded earlier is still there!!!
        manually start the NameNode that was killed
        sbin/hadoop-daemon.sh start namenode
        open in a browser: http://192.168.1.201:50070
        NameNode 'hadoop01:9000' (standby)
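        The HA state can also be checked from the command line instead of the browser (a sketch; nn1 and nn2 are the NameNode names configured in hdfs-site.xml):
        hdfs haadmin -getServiceState nn1
        hdfs haadmin -getServiceState nn2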
    
    Verify YARN:
        run the WordCount program from the examples shipped with hadoop:
        hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount /profile /out
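        To inspect the result (a sketch; wordcount writes its reduce output into the output directory, typically as part-r-00000):
        hadoop fs -ls /out
        hadoop fs -cat /out/part-r-00000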
    
    OK, all done!!!

