(6) Building a Distributed Big-Data Stack with Ansible: a Highly Available Hadoop Cluster

We have finally reached the main event: building the Hadoop distributed cluster. Hadoop is the core of the open-source big-data stack and the foundation that Hive, HBase and the rest are built on; even today's hottest engines, Spark and Flink, can run distributed by delegating to Hadoop (more precisely, to YARN).

The playbook for the whole Hadoop cluster setup breaks down into three steps:

  1. Download the Hadoop tarball, extract it, and configure the environment variables
  2. Prepare core-site.xml, hdfs-site.xml, slaves, yarn-site.xml and mapred-site.xml, and copy them into the Hadoop configuration directory
  3. Run the ansible command to finish the setup, then SSH to a NameNode and start the whole cluster with start-dfs.sh/start-yarn.sh, and verify that it is up (a sketch of the top-level playbook follows this list)
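
For orientation, here is a minimal sketch of the top-level playbook these roles hang off of. The group name cluster matches the PLAY [cluster] output in step 3; everything else about your repository layout may differ.

hadoop.yaml

---
# Sketch: apply the hadoop role to every host in the "cluster" group.
# Run with: ansible-playbook hadoop.yaml -i production/hosts
- hosts: cluster
  roles:
    - hadoop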

1. Download and extract the tarball, and configure the environment variables

tasks/main.yaml


---
# Create the DataNode data directory; data_base is defined in the outer group_vars
- name: Create DataNode Data Directory
  file: path={{data_base}}/hdfs/datanode state=directory
  
# Create the NameNode data directory
- name: Create NameNode Data Directory
  file: path={{data_base}}/hdfs/namenode state=directory

# Create the JournalNode data directory
- name: Create JOURNAL Data Directory
  file: path={{data_base}}/jnode state=directory

# Start the download. The real mirror URL looks like http://mirror.bit.edu.cn/apache/hadoop/core/hadoop-2.7.7/hadoop-2.7.7.tar.gz; variables such as {{filename}} are defined in this role's vars/main.yaml
# One subtlety with get_url: if dest points at a concrete file that already exists on the target, the download is skipped unless force=yes is set; if dest is only a directory, the file is re-downloaded on every run
- name: Download hadoop file
  get_url: url='{{download_server}}/{{path}}/{{filename}}-{{version}}/{{filename}}-{{version}}.{{suffix}}' dest="{{download_base}}/{{filename}}-{{version}}.{{suffix}}"
  register: download_result

# Check whether the extracted directory already exists
- name: Check whether registered
  stat:
    path: "{{app_base}}/{{filename}}-{{version}}"
  register: node_files

# debug output for troubleshooting
- debug:
    msg: "{{node_files.stat.exists}}"

# Extract only when the download succeeded and the target directory does not exist yet, so an existing install is never overwritten
- name: Extract archive
  unarchive: dest={{app_base}} src='{{download_base}}/{{filename}}-{{version}}.{{suffix}}' remote_src=yes
  when: download_result.state == 'file' and not node_files.stat.exists

# Create a version-independent soft link
- name: Add soft link
  file: src="{{app_base}}/{{filename}}-{{version}}" dest={{app_base}}/{{filename}} state=link

# Add the HADOOP_HOME environment variable; the tasks below all follow the same pattern, and become: yes means the task runs as root
- name: Add ENV HADOOP_HOME
  become: yes
  lineinfile: dest=/etc/profile.d/app_bin.sh line="export HADOOP_HOME={{app_base}}/{{filename}}"

- name: Add ENV HADOOP_PREFIX
  become: yes
  lineinfile: dest=/etc/profile.d/app_bin.sh line="export HADOOP_PREFIX=$HADOOP_HOME"

- name: Add ENV HADOOP_COMMON_HOME
  become: yes
  lineinfile: dest=/etc/profile.d/app_bin.sh line="export HADOOP_COMMON_HOME=$HADOOP_HOME"

- name: Add ENV YARN_CONF_DIR
  become: yes
  lineinfile: dest=/etc/profile.d/app_bin.sh line="export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop"

- name: Add ENV HADOOP_CONF_DIR
  become: yes
  lineinfile: dest=/etc/profile.d/app_bin.sh line="export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop"

- name: Export PATH
  become: yes
  lineinfile: dest=/etc/profile.d/app_bin.sh line="export PATH=$PATH:$HADOOP_HOME/bin"
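
As a hedged sanity check (my addition, not part of the original role), an extra task can source the profile and print the Hadoop version on every node:

# Optional check: confirm the binaries and environment variables are
# visible on every host; changed_when: false keeps the task idempotent.
- name: Verify hadoop is on the PATH
  shell: source /etc/profile && hadoop version
  args:
    executable: /bin/bash
  register: hadoop_version
  changed_when: false

- debug:
    msg: "{{ hadoop_version.stdout_lines[0] }}"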

All the Hadoop-related variables used above are defined in roles/hadoop/vars/main.yaml and are visible only to the Hadoop role.
vars/main.yaml

path: hadoop/core
filename: hadoop
version: 2.7.7
suffix: tar.gz
appname: hadoop
# JVM options for starting the NameNode/DataNode daemons
HADOOP_HEAPSIZE: 1024
HADOOP_NAMENODE_INIT_HEAPSIZE: 1024
HADOOP_NAMENODE_OPTS: -Xms1g -Xmx1g -XX:+UseG1GC -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=512m -XX:G1RSetUpdatingPauseTimePercent=5 -XX:InitiatingHeapOccupancyPercent=70 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=20 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintPromotionFailure -XX:PrintFLSStatistics=1 -Xloggc:/data/bigdata/log/namenode-gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M
HADOOP_DATANODE_OPTS: -Xms1g -Xmx1g -XX:+UseG1GC -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=512m -XX:G1RSetUpdatingPauseTimePercent=5 -XX:InitiatingHeapOccupancyPercent=70 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=20 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintPromotionFailure -XX:PrintFLSStatistics=1 -Xloggc:/data/bigdata/log/datanode-gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M
HADOOP_SECONDARYNAMENODE_OPTS: -Xms1g -Xmx1g -XX:+UseG1GC -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=512m -XX:G1RSetUpdatingPauseTimePercent=5 -XX:InitiatingHeapOccupancyPercent=70 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=20 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintPromotionFailure -XX:PrintFLSStatistics=1 -Xloggc:/data/bigdata/log/secondarynamenode-gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M
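
The role also leans on a few variables (download_server, download_base, app_base, data_base, JAVA_HOME) that live in the shared group_vars rather than in this role. Their actual values are never shown here, so the following is only an assumed sketch, inferred from the paths that appear in the configs and logs below:

group_vars/all.yaml (assumed sketch)

download_server: http://mirror.bit.edu.cn/apache   # matches the mirror URL above
download_base: /data/bigdata/download              # assumption
app_base: /data/bigdata/app                        # matches the log path /data/bigdata/app/hadoop/logs
data_base: /data/bigdata/data                      # matches file:///data/bigdata/data/hdfs/datanode
JAVA_HOME: /usr/java/default                       # assumption: point this at your JDK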

2. Prepare core-site.xml, yarn-site.xml, slaves, mapred-site.xml and the other configuration files, and copy them to the $HADOOP_HOME/etc/hadoop configuration directory on every remote host

1. templates/core-site.xml

core-site.xml configures the common components; for a test cluster the following three properties are enough

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///data/bigdata/data/hadoop</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>master1:2181,master2:2181,slave1:2181</value>
    </property>
</configuration>

2. templates/slaves

slaves tells the NameNode which DataNodes exist. It only needs to be present on the NameNode nodes, one DataNode hostname/IP per line (a template-driven variant is sketched after the list):

master1
master2
slave1
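
Because the file is rendered by the template module, the host list can also be generated from the inventory instead of being hard-coded. A sketch, assuming your inventory defines a datanodes group:

{# templates/slaves -- assumed sketch: one line per host in an inventory group named "datanodes" #}
{% for host in groups['datanodes'] %}
{{ host }}
{% endfor %}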

3. templates/hdfs-site.xml

Here we build an HA HDFS cluster, which needs the properties below; see the documentation for what each property does.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/bigdata/data/hdfs/datanode</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/bigdata/data/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>master1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>master2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>master1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>master2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://master2:8485;slave1:8485;master1:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/bigdata/data/jnode</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
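
Note that sshfence only works if the hadoop user on each NameNode can SSH to the other NameNode with the private key configured above. A hedged sketch for distributing the key with Ansible, assuming the matching public key has already been collected to files/hadoop_id_rsa.pub on the control machine:

# Sketch (assumes files/hadoop_id_rsa.pub exists on the control machine):
# let the hadoop user SSH between NameNodes so sshfence can do its job.
- name: Authorize hadoop user key for fencing
  authorized_key:
    user: hadoop
    key: "{{ lookup('file', 'files/hadoop_id_rsa.pub') }}"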

4. templates/yarn-site.xml

yarn-site.xml configures the YARN cluster. Here we build an HA scheme with Active/Standby ResourceManagers to raise the availability of the whole cluster. Pay attention to the yarn.nodemanager.resource.memory-mb setting: my VMs have 32G of RAM, so I handed 24G to the Hadoop cluster. Size it against the actual memory of your machines; if it exceeds the host's physical memory, the NodeManager may fail to start.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Enable ResourceManager high availability -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- RM cluster identifier -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>ns1</value>
    </property>


    <property>
        <!-- IDs of the two ResourceManagers -->
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>


    <!-- Automatic RM failover -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.recover.enabled</name>
        <value>true</value>
    </property>


    <!-- Enable RM state recovery after restart -->

    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>


    <!-- RM host 1 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master1</value>
    </property>

    <!-- RM host 2 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>master2</value>
    </property>


    <!-- RM state store implementation; a memory-based and a filesystem-based store exist, but with RM HA the ZooKeeper-based store is the usual choice, matching the zk-address below -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    </property>


    <!-- Store the RM state in the ZooKeeper ensemble -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>master1:2181,master2:2181,slave1:2181</value>
    </property>


    <!-- Scheduler address of each RM -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>master1:8030</value>
    </property>


    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>master2:8030</value>
    </property>


    <!-- NodeManagers exchange information with the RM through this address -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>master1:8031</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>master2:8031</value>
    </property>


    <!-- Clients submit applications to the RM through this address -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>master1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>master2:8032</value>
    </property>


    <!-- Administrators send management commands to the RM through this address -->
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>master1:8033</value>
    </property>

    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>master2:8033</value>
    </property>


    <!-- RM web UI address, for viewing cluster information -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>master1:8088</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>master2:8088</value>
    </property>


    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>

    <property>
        <name>yarn.nodemanager.hostname</name>
        <value>0.0.0.0</value>
    </property>

    <property>
        <name>yarn.nodemanager.bind-host</name>
        <value>0.0.0.0</value>
    </property>


    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
        <description>Classpath for typical applications.</description>
        <name>yarn.application.classpath</name>
        <value>$HADOOP_CONF_DIR
            ,$HADOOP_COMMON_HOME/share/hadoop/common/*
            ,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*
            ,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*
            ,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*
            ,$YARN_HOME/share/hadoop/yarn/*
        </value>
    </property>

    <!-- Configurations for NodeManager -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>24576</value>
        <description>Maximum memory the NodeManager may use, in MB</description>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
        <description>Minimum physical memory a single container may request</description>
    </property>

    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>16</value>
        <description>Maximum number of virtual CPU cores the NodeManager may use</description>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>5632</value>
        <description>Maximum physical memory a single container may request</description>
    </property>
</configuration>

5. templates/mapred-site.xml

mapred-site.xml carries the configuration that MapReduce jobs need; the memory figures are worked through after the file

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>


    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>master1:8021</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master1:19888</value>
    </property>
    <property>
        <name>mapred.max.maps.per.node</name>
        <value>2</value>
    </property>
    <property>
        <name>mapred.max.reduces.per.node</name>
        <value>1</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1408</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1126M</value>
    </property>

    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2816</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2252M</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>512</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>100</value>
    </property>
</configuration>
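
The memory figures above follow a simple rule of thumb (my reading of the numbers, not something Hadoop enforces): each JVM heap in *.java.opts is about 0.8 of its container size, the reduce container is twice the map container, and the scheduler maximum holds four map containers:

1408 MB x 0.8 = 1126.4 ≈ 1126 MB   (mapreduce.map.java.opts -Xmx1126M)
2816 MB x 0.8 = 2252.8 ≈ 2252 MB   (mapreduce.reduce.java.opts -Xmx2252M)
2816 MB = 2 x 1408 MB;  5632 MB = 4 x 1408 MB   (yarn.scheduler.maximum-allocation-mb)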

6. tasks/main.yaml

With all the configuration files ready, extend tasks/main.yaml to render them and copy them to the remote hosts.
tasks/main.yaml

- name: Copy core-site.xml
  template: src=core-site.xml dest="{{app_base}}/{{appname}}/etc/hadoop/core-site.xml" mode=0755

- name: Copy hdfs-site.xml
  template: src=hdfs-site.xml dest="{{app_base}}/{{appname}}/etc/hadoop/hdfs-site.xml" mode=0755

- name: Copy mapred-site.xml
  template: src=mapred-site.xml dest="{{app_base}}/{{appname}}/etc/hadoop/mapred-site.xml" mode=0755

- name: Copy yarn-site.xml
  template: src=yarn-site.xml dest="{{app_base}}/{{appname}}/etc/hadoop/yarn-site.xml" mode=0755

- name: Copy slaves
  template: src=slaves dest="{{app_base}}/{{appname}}/etc/hadoop/slaves" mode=0755
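
Optionally, the template module's validate parameter can reject a malformed file before it replaces the real one. A sketch, assuming xmllint (from libxml2) is installed on the target hosts:

# Sketch (assumes xmllint exists on the remote hosts): %s is the rendered
# temp file; the copy is aborted if it is not well-formed XML.
- name: Copy core-site.xml (validated)
  template:
    src: core-site.xml
    dest: "{{app_base}}/{{appname}}/etc/hadoop/core-site.xml"
    mode: '0755'
    validate: xmllint --noout %s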

7. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh on the remote hosts to give the DataNode/NameNode daemons sensible startup parameters

For this script we simply edit the file in place with the lineinfile module instead of using a template file.
tasks/main.yaml

- name: Update ENV JAVA_HOME
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/hadoop-env.sh" line="export JAVA_HOME={{JAVA_HOME}}"

- name: Update ENV HADOOP_HEAPSIZE
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/hadoop-env.sh" line="export HADOOP_HEAPSIZE={{HADOOP_HEAPSIZE}}"

- name: Update ENV HADOOP_NAMENODE_INIT_HEAPSIZE
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/hadoop-env.sh" line="export HADOOP_NAMENODE_INIT_HEAPSIZE={{HADOOP_NAMENODE_INIT_HEAPSIZE}}"

- name: Update HADOOP_NAMENODE_OPTS
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/hadoop-env.sh" line='export HADOOP_NAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} {{HADOOP_NAMENODE_OPTS}}"'

- name: Update HADOOP_DATANODE_OPTS
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/hadoop-env.sh" line='export HADOOP_DATANODE_OPTS="${HADOOP_DATANODE_OPTS} {{HADOOP_DATANODE_OPTS}}"'

- name: Update HADOOP_SECONDARYNAMENODE_OPTS
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/hadoop-env.sh" line='export HADOOP_SECONDARYNAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} {{HADOOP_SECONDARYNAMENODE_OPTS}}"'

- name: Update mapred-env.sh ENV JAVA_HOME
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/mapred-env.sh" line="export JAVA_HOME={{JAVA_HOME}}"

- name: Update yarn-env.sh ENV JAVA_HOME
  lineinfile: dest="{{app_base}}/{{appname}}/etc/hadoop/yarn-env.sh" line="export JAVA_HOME={{JAVA_HOME}}"

3. Run the ansible command to finish the Hadoop cluster setup, then SSH to a NameNode and start the whole cluster with start-dfs.sh/start-yarn.sh, and verify that it is up

Now for the exciting moment:

$] ansible-playbook hadoop.yaml -i production/hosts
PLAY [cluster] *********************************************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************************************
ok: [master1]
ok: [master2]
ok: [slave1]

TASK [hadoop : Create Hadoop Data Directory] ***************************************************************************************************************************************************
ok: [slave1]
ok: [master2]
ok: [master1]

TASK [hadoop : Create DataNode Data Directory] *************************************************************************************************************************************************
ok: [master1]
ok: [slave1]
ok: [master2]

TASK [hadoop : Create NameNode Data Directory] *************************************************************************************************************************************************
ok: [slave1]
ok: [master1]
ok: [master2]

TASK [hadoop : Create JOURNAL Data Directory] **************************************************************************************************************************************************
ok: [master2]
ok: [slave1]
ok: [master1]

TASK [hadoop : Download hadoop file] ***********************************************************************************************************************************************************
ok: [slave1]
ok: [master2]
ok: [master1]
......

TASK [hadoop : Update yarn-env.sh ENV JAVA_HOME] ***********************************************************************************************************************************************
ok: [master2]
ok: [master1]
ok: [slave1]

PLAY RECAP *************************************************************************************************************************************************************************************
master1 : ok=29   changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
master2 : ok=29   changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
slave1 : ok=29   changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0  

Then SSH to master1 and start the cluster.
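
One caveat before the very first start: an HA HDFS cluster needs a one-time bootstrap (JournalNodes running, nn1 formatted and started, nn2 synced from nn1, the ZKFC znode formatted) before start-dfs.sh will bring everything up. A sketch of that sequence as Ansible tasks, using standard Hadoop 2.x commands; on an already-initialized cluster, skip it:

# One-time HA bootstrap (sketch); Ansible's task-by-task execution
# preserves the required ordering across hosts.
- name: Start JournalNodes
  shell: "{{app_base}}/{{appname}}/sbin/hadoop-daemon.sh start journalnode"

- name: Format and start nn1
  shell: "{{app_base}}/{{appname}}/bin/hdfs namenode -format && {{app_base}}/{{appname}}/sbin/hadoop-daemon.sh start namenode"
  when: inventory_hostname == 'master1'

- name: Sync nn2 from nn1
  shell: "{{app_base}}/{{appname}}/bin/hdfs namenode -bootstrapStandby"
  when: inventory_hostname == 'master2'

- name: Format the ZKFC znode
  shell: "{{app_base}}/{{appname}}/bin/hdfs zkfc -formatZK"
  when: inventory_hostname == 'master1'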

sbin]# ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /data/bigdata/app/hadoop/logs/yarn-root-resourcemanager-master.out
......

Check the running processes with jps

~]$ jps
1938861 Jps
3240019 JournalNode
433395 ResourceManager
3239463 NameNode
1280395 QuorumPeerMain
433534 NodeManager
1090267 Kafka
3240349 DFSZKFailoverController
3239695 DataNode

Open a browser at http://master1:50070/ to inspect HDFS. If the page loads, namenode1 started successfully; likewise, if
master2:50070/ loads, namenode2 is healthy. One of the two pages should show

Overview ‘master1:8020’ (active)

and the other

Overview ‘master2:8020’ (standby)

(or master1 standby and master2 active),
which means our HDFS HA configuration works.
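
The same check can be scripted rather than eyeballed; hdfs haadmin is a standard HDFS command, and nn1/nn2 are the NameNode IDs from hdfs-site.xml:

# Sketch: ask each NameNode for its HA state ("active" or "standby").
- name: Check NameNode HA states
  shell: hdfs haadmin -getServiceState {{ item }}
  loop:
    - nn1
    - nn2
  register: ha_state
  changed_when: false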

Next, open http://master1:8088/cluster to inspect the YARN ResourceManager; if it loads normally, YARN started successfully as well.

That completes the highly available Hadoop cluster.
