Hadoop HA Installation and Configuration

1 Installing and Configuring ZooKeeper

1. Download a release from https://archive.apache.org/dist/zookeeper/
2. Upload the tarball to the cluster (drag and drop works)
3. Extract it: tar -zxvf xxx.tar.gz -C /path
4. Edit the configuration files
Go into the conf directory: cd $ZKHOME/conf

```
> mv zoo_sample.cfg zoo.cfg   # rename the sample config
> vim zoo.cfg
	dataDir=/apps/zkdata
	server.1=kk-01:2888:3888
	server.2=kk-02:2888:3888
	server.3=kk-03:2888:3888

> mkdir -p /apps/zkdata
> echo 1 > /apps/zkdata/myid   ## on kk-01
> scp -r zookeeper-3.4.11/ kk-{02,03}:$PWD
> echo 2 > /apps/zkdata/myid   ## on kk-02
> echo 3 > /apps/zkdata/myid   ## on kk-03
```

5. Configure environment variables (append to /etc/profile on every node, then reload):

export ZOOKEEPER_HOME=/apps/zookeeper-3.4.11
export PATH=$PATH:$ZOOKEEPER_HOME/bin

source /etc/profile

6. Start ZooKeeper

zkServer.sh {start|stop|restart|status}
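
A minimal sketch for bringing up and checking the whole quorum from one node, assuming passwordless SSH to kk-01 through kk-03 and the PATH above available in non-interactive shells; a healthy ensemble reports one leader and two followers:

```
# Start ZooKeeper on every node, then print each node's role
for host in kk-01 kk-02 kk-03; do
  ssh "$host" "zkServer.sh start"
done
for host in kk-01 kk-02 kk-03; do
  echo -n "$host: "
  ssh "$host" "zkServer.sh status 2>&1 | grep Mode"
done
```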

2 Hadoop Configuration

core-site.xml

<configuration>
        <!-- Non-HA default, superseded by the HA nameservice below;
             with HA enabled, fs.defaultFS must point at the nameservice.
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://bigdata01:9000</value>
        </property>
        -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/opt/components/data/hadoop2.7.3_data/tmp</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>

<!-- HA -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/components/data/hadoop2.7.3_data/journaldata</value>
</property>

<property>
   <name>ha.zookeeper.quorum</name>
   <value>bigdata01:2181,bigdata02:2181,bigdata03:2181</value>
 </property>

</configuration>
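
To confirm that the merged configuration resolves to the nameservice, hdfs getconf prints the effective value of a key:

```
# Should print hdfs://mycluster when the HA settings are in effect
bin/hdfs getconf -confKey fs.defaultFS
```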

hdfs-site.xml

<configuration>
   <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <!-- Note: a Secondary NameNode is not run once HA is enabled;
         checkpointing is handled by the standby NameNode. -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>bigdata02:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>

<!-- HA -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>bigdata01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>bigdata02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>bigdata01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>bigdata02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://bigdata01:8485;bigdata02:8485;bigdata03:8485/mycluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>

    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/root/.ssh/id_rsa</value>
    </property>

<property>
   <name>dfs.ha.automatic-failover.enabled</name>
   <value>true</value>
 </property>


</configuration>
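
The sshfence method above only works if each NameNode can SSH to the other without a password, using the private key named in dfs.ha.fencing.ssh.private-key-files. A minimal sketch for wiring that up (run as root on bigdata01, then the mirror image on bigdata02):

```
# Create a key pair if none exists, then install the public key on the
# other NameNode so sshfence can log in non-interactively
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
ssh-copy-id root@bigdata02
```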

mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>bigdata01:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>bigdata01:19888</value>
        </property>

</configuration>
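
mapreduce.jobhistory.address above points at bigdata01, so the history server belongs on that host; in Hadoop 2.x it is started with mr-jobhistory-daemon.sh:

```
# Run on bigdata01, where the job history addresses point
sbin/mr-jobhistory-daemon.sh start historyserver
```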

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>bigdata01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>bigdata01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>bigdata01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>bigdata01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>bigdata01:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>1536</value>
    </property>

<!-- HA -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>bigdata01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>bigdata02</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>bigdata01:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>bigdata02:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>bigdata01:2181,bigdata02:2181,bigdata03:2181</value>
</property>

</configuration>
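
Once both ResourceManagers are up, their roles can be verified with yarn rmadmin, using the rm-ids configured above:

```
# One should print "active", the other "standby"
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2
```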

3 HA Startup Steps

Background reading:
This article explains it in great detail: http://www.tuicool.com/articles/jameeqm
For a deeper dive into how QJM works: http://www.tuicool.com/articles/eIBB3a
Installing and configuring HA: http://makaidong.com/tototuzuoquan/1/1002_10219241_2.htm

Startup order
ZooKeeper -> JournalNode -> format NameNode -> initialize JournalNode
-> create the ZooKeeper namespace (zkfc) -> NameNode -> DataNode -> ResourceManager -> NodeManager.

First-time startup of an HA cluster:
hdfs zkfc -formatZK (I had left this out before, and it matters: without registering in ZooKeeper, HDFS and ZooKeeper have no relationship at all)
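
A short sketch of that registration plus the failover controllers (run -formatZK once from either NameNode while ZooKeeper is up; the zkfc daemon is then started on both NameNode hosts):

```
# One-time: create the HA parent znode in ZooKeeper
bin/hdfs zkfc -formatZK
# On each NameNode host: start the ZK failover controller
sbin/hadoop-daemon.sh start zkfc
```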

1. Start the JournalNodes

sbin/hadoop-daemon.sh start journalnode (run on every JournalNode machine)
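
From one node this can be scripted, assuming passwordless SSH and HADOOP_HOME set on every host:

```
# Start a JournalNode on each of the three hosts
for host in bigdata01 bigdata02 bigdata03; do
  ssh "$host" "\$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode"
done
```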

2. Start the NameNodes

1) Format: bin/hdfs namenode -format
2) Start this NameNode: sbin/hadoop-daemon.sh start namenode
3) Bootstrap the other NameNode: bin/hdfs namenode -bootstrapStandby. Mind the order of steps 2 and 3: I once reversed them, and the second NameNode's hadoop.tmp.dir directory never received any files (a quick check is sketched after this list).
4) Start the second NameNode: sbin/hadoop-daemon.sh start namenode
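
A simple way to confirm the bootstrap worked, assuming the hadoop.tmp.dir from core-site.xml above (by default the NameNode keeps its metadata under dfs/name/current inside it):

```
# On the second NameNode: should list fsimage/edits files, not be empty
ls /opt/components/data/hadoop2.7.3_data/tmp/dfs/name/current
```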

3. Here is a trap for newcomers. From the study material we know that one NameNode should be active and the other standby, yet at this point both are standby.

I thought something had gone wrong, until I finally discovered that a [manual transition] is needed here:
bin/hdfs haadmin -transitionToActive nn1
(Note: with dfs.ha.automatic-failover.enabled set to true as above, the ZKFC daemons normally elect an active NameNode on their own once they are running; a manual transition then additionally requires the --forcemanual flag.)
The cluster is now reachable at the HTTP address configured earlier:
http://master:50070
Tip: disable the firewall: sudo ufw disable

4. Start the DataNodes

On each machine: sbin/hadoop-daemon.sh start datanode

--------- End
Remember to sync ZooKeeper with Hadoop first. Command: hdfs zkfc -formatZK.
Converting a non-HA cluster to an HA cluster (compared with the first-time startup above, only step 2 changes, from formatting to initialization; a consolidated sketch follows the list):
1. Start all JournalNodes
sbin/hadoop-daemon.sh start journalnode
2. On one of the NameNodes, initialize the JournalNodes' shared edits data
bin/hdfs namenode -initializeSharedEdits
3. Start that NameNode
sbin/hadoop-daemon.sh start namenode
4. On the second NameNode, sync:
bin/hdfs namenode -bootstrapStandby
5. Start the second NameNode
6. Start all the DataNodes
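
A consolidated sketch of the conversion, assuming passwordless SSH, HADOOP_HOME set on every host, and the layout from the configs above (nn1 on bigdata01, nn2 on bigdata02; run from bigdata01):

```
# 1. JournalNodes everywhere
for host in bigdata01 bigdata02 bigdata03; do
  ssh "$host" "\$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode"
done
# 2-3. On bigdata01: initialize the shared edits, then start the NameNode
bin/hdfs namenode -initializeSharedEdits
sbin/hadoop-daemon.sh start namenode
# 4-5. On bigdata02: sync from the first NameNode, then start
ssh bigdata02 "\$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby && \$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode"
# 6. DataNodes everywhere
for host in bigdata01 bigdata02 bigdata03; do
  ssh "$host" "\$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode"
done
```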
------------ End
Some common cluster-administration commands:
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -failover nn1 nn2
bin/hdfs haadmin -transitionToActive nn1 (rarely used: it does not run the fencing methods, so the previous NameNode cannot be shut down, risking split-brain)
bin/hdfs haadmin -transitionToStandby nn2 (rarely used, for the same reason)
bin/hdfs haadmin -checkHealth nn2
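
For example, to see at a glance which NameNode currently holds the active role (each call prints active or standby):

```
for nn in nn1 nn2; do
  echo -n "$nn: "
  bin/hdfs haadmin -getServiceState "$nn"
done
```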
