I. NameNode Federation
A single NameNode does two jobs:
- it receives all client requests
- it caches the file-system metadata (on the order of 1000 MB) in memory
Federation addresses two problems:
(1) spreading the request load across multiple NameNodes
(2) caching more metadata in total across them
Setting up a NameNode federation:
1. Planning
NameNode: bigdata112, bigdata113
DataNode: bigdata114, bigdata115
2. Set up on bigdata112
core-site.xml
<property>
    <name>hadoop.tmp.dir</name>
    <value>/root/training/hadoop-2.7.3/tmp</value>
</property>
hdfs-site.xml
<property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>192.168.157.112:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>192.168.157.112:50070</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address.ns1</name>
    <value>192.168.157.112:50090</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>192.168.157.113:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>192.168.157.113:50070</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address.ns2</name>
    <value>192.168.157.113:50090</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
yarn-site.xml
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.157.112</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
slaves
bigdata114
bigdata115
Configure the routing rules (viewFS)
Add the following directly to core-site.xml.
Note: xdl1 is the name of the federation.
<property>
    <name>fs.viewfs.mounttable.xdl1.homedir</name>
    <value>/home</value>
</property>
<property>
    <name>fs.viewfs.mounttable.xdl1.link./movie</name>
    <value>hdfs://192.168.157.112:9000/movie</value>
</property>
<property>
    <name>fs.viewfs.mounttable.xdl1.link./mp3</name>
    <value>hdfs://192.168.157.113:9000/mp3</value>
</property>
<property>
    <name>fs.default.name</name>
    <value>viewfs://xdl1</value>
</property>
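Conceptually, the viewFS client routes each logical path to a physical HDFS location by longest-prefix match against the mount table. A minimal Python sketch of that idea (an illustration only, not Hadoop's actual implementation; the table mirrors the fs.viewfs.mounttable.xdl1.link.* properties above):

```python
# Mount table mirroring the fs.viewfs.mounttable.xdl1.link.* properties:
# each mount point maps a prefix of the logical namespace to a physical
# HDFS URI on one of the federated NameNodes.
MOUNT_TABLE = {
    "/movie": "hdfs://192.168.157.112:9000/movie",
    "/mp3": "hdfs://192.168.157.113:9000/mp3",
}

def resolve(path: str) -> str:
    """Map a viewfs path to its backing hdfs:// URI by longest-prefix match."""
    best = ""
    for mount in MOUNT_TABLE:
        # A mount point matches the path itself or any path below it.
        if (path == mount or path.startswith(mount + "/")) and len(mount) > len(best):
            best = mount
    if not best:
        raise FileNotFoundError(f"no mount point covers {path}")
    return MOUNT_TABLE[best] + path[len(best):]

print(resolve("/movie/2018/avatar.mp4"))
# hdfs://192.168.157.112:9000/movie/2018/avatar.mp4
print(resolve("/mp3/song.mp3"))
# hdfs://192.168.157.113:9000/mp3/song.mp3
```

So a client writing under /movie is served by the NameNode on bigdata112, while /mp3 goes to bigdata113, without the client naming either NameNode explicitly.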
Note: if there are many routing rules, core-site.xml becomes hard to maintain.
You can keep the rules in a separate XML file, e.g. mountTable.xml, and include it from core-site.xml.
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ViewFs.html
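The ViewFs guide linked above shows that such a separate mount-table file can be pulled into core-site.xml with XInclude. A sketch (the filename mountTable.xml is the example name from the note above):

```xml
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
    <!-- pulls the fs.viewfs.mounttable.* properties in from a separate file -->
    <xi:include href="mountTable.xml" />
    <!-- the rest of core-site.xml follows as usual -->
</configuration>
```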
3. Copy the configuration to the other nodes
scp -r hadoop-2.7.3/ root@bigdata113:/root/training
scp -r hadoop-2.7.3/ root@bigdata114:/root/training
scp -r hadoop-2.7.3/ root@bigdata115:/root/training
4. Format each NameNode (bigdata112 and bigdata113) separately, using the same cluster ID so they join the same federation:
hdfs namenode -format -clusterId xdl1
5. Start the cluster (e.g. start-all.sh, or start-dfs.sh followed by start-yarn.sh)
6. Following the routing rules, create the corresponding directory on each NameNode
hadoop fs -mkdir hdfs://192.168.157.112:9000/movie
hadoop fs -mkdir hdfs://192.168.157.113:9000/mp3
7. Working with HDFS
[root@bigdata112 training]# hdfs dfs -ls /
Found 2 items
-r-xr-xr-x - root root 0 2018-10-05 01:11 /movie
-r-xr-xr-x - root root 0 2018-10-05 01:11 /mp3
Note: what this listing shows is the viewFS view (the mounted federation namespace), not the namespace of any single NameNode.