Hadoop HDFS HA Setup Tutorial
Preface:
Problems with a plain HDFS cluster:
1. The NameNode is a single point of failure
2. The NameNode is under heavy load and limited by its memory
Solutions:
Single point of failure:
High-availability scheme: HA (High Availability)
Multiple NameNodes with active/standby failover
Heavy load, limited memory:
Federation (metadata sharding)
Multiple NameNodes, each managing a different slice of the metadata
HA (High Availability) architecture diagram: (figure not reproduced here)
Setup steps:
1. Base environment
CentOS 7
Java 1.8
hadoop-2.9.2.tar.gz
zookeeper-3.4.8.tar.gz
2. Role planning
ip | host | NameNode | JournalNode | ZKFC | ZK | DataNode |
---|---|---|---|---|---|---|
192.168.116.128 | hadoop1 | √ | √ | √ | | |
192.168.116.129 | hadoop2 | √ | √ | √ | √ | √ |
192.168.116.130 | hadoop3 | | √ | | √ | √ |
192.168.116.131 | hadoop4 | | | | √ | √ |
3. Basic configuration (do this on all four nodes, adjusting the hostname):
Set the hostname (on CentOS 7 use hostnamectl; editing /etc/sysconfig/network only applies to CentOS 6)
hostnamectl set-hostname hadoop1
Map each node's IP to its hostname
vi /etc/hosts
192.168.116.128 hadoop1
192.168.116.129 hadoop2
192.168.116.130 hadoop3
192.168.116.131 hadoop4
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
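The change in /etc/selinux/config only takes effect after a reboot; to also stop SELinux enforcement in the current session:
setenforce 0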
Set up time synchronization
yum install ntp -y
vi /etc/ntp.conf
server ntp1.aliyun.com
systemctl start ntpd
systemctl enable ntpd
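To verify that the node is actually syncing against the configured server:
ntpq -p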
Passwordless ssh:
ssh localhost    # 1) verifies passwordless login is not yet set up  2) passively creates /root/.ssh
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
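start-dfs.sh on hadoop1 needs passwordless ssh into every other node, and the sshfence method configured below needs the two NameNodes to reach each other. A minimal sketch of distributing the key from hadoop1 (ssh-copy-id prompts for each node's password once):
for host in hadoop2 hadoop3 hadoop4; do
  ssh-copy-id -i ~/.ssh/id_dsa.pub root@$host
done
Repeat the key generation and distribution on hadoop2 as well, since it also runs zkfc and may need to fence hadoop1.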
4. Installation:
cd /
mkdir local
Install the JDK:
mkdir /local/java
Upload jdk-8u171-linux-i586.tar.gz to the /local/java directory
cd /local/java
tar -zxvf jdk-8u171-linux-i586.tar.gz
Install hadoop:
mkdir /local/hadoop
Upload hadoop-2.9.2.tar.gz to the /local/hadoop directory
cd /local/hadoop
tar -zxvf hadoop-2.9.2.tar.gz
Configure environment variables
vi /etc/profile
export JAVA_HOME=/local/java/jdk1.8.0_171
export HADOOP_HOME=/local/hadoop/hadoop-2.9.2
export ZOOKEEPER_HOME=/local/zookeeper/zookeeper-3.4.8
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
export CLASSPATH=.
source /etc/profile
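A quick sanity check that the variables took effect:
java -version
hadoop version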
Install zookeeper (per the role plan, ZooKeeper runs on hadoop2, hadoop3 and hadoop4):
mkdir /local/zookeeper
Upload zookeeper-3.4.8.tar.gz to the /local/zookeeper directory
cd /local/zookeeper
tar -zxvf zookeeper-3.4.8.tar.gz
5. Modify the configuration
Configure hadoop:
cd $HADOOP_HOME/etc/hadoop
# JAVA_HOME must be set explicitly for hadoop here, otherwise daemons started remotely over ssh cannot find it
vi hadoop-env.sh
export JAVA_HOME=/local/java/jdk1.8.0_171
# Edit core-site.xml (the <property> blocks below go inside its <configuration> element)
vi core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
</property>
# Edit hdfs-site.xml (likewise inside <configuration>)
vi hdfs-site.xml
# replication factor of 2
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/local/hadoop/bigdata/ha/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/local/hadoop/bigdata/dfs/data</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>hadoop1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>hadoop2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>hadoop1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>hadoop2:50070</value>
</property>
# The following sets which nodes the JournalNodes run on and which disk their data is stored on
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/local/hadoop/bigdata/dfs/jn</value>
</property>
# Proxy class for HA client failover and the fencing method; we use passwordless ssh (sshfence)
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_dsa</value>
</property>
# Enable automatic failover (carried out by the zkfc daemons)
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
# Configure which hosts the datanode role starts on
vi slaves
hadoop2
hadoop3
hadoop4
Likewise install hadoop on hadoop2, hadoop3 and hadoop4, e.g. by copying the configured installation over with scp, as sketched below.
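A minimal sketch, run from hadoop1 (assumes the passwordless ssh set up earlier; all paths are the ones used above):
for host in hadoop2 hadoop3 hadoop4; do
  ssh root@$host 'mkdir -p /local/java /local/hadoop'
  scp -r /local/java/jdk1.8.0_171 root@$host:/local/java/
  scp -r /local/hadoop/hadoop-2.9.2 root@$host:/local/hadoop/
  scp /etc/profile root@$host:/etc/profile
done
Remember to run source /etc/profile on each target node afterwards.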
Configure zookeeper:
Log in to hadoop2
cd $ZOOKEEPER_HOME/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
dataDir=/local/zookeeper/zkdata
server.1=hadoop2:2888:3888
server.2=hadoop3:2888:3888
server.3=hadoop4:2888:3888
# create the myid file
mkdir /local/zookeeper/zkdata
echo 1 > /local/zookeeper/zkdata/myid
Likewise set up zookeeper on hadoop3 and hadoop4, with myid set to 2 and 3 respectively; see the sketch below.
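A sketch, run from hadoop2 where zookeeper was just configured (again assuming passwordless ssh; hadoop2 may need its key distributed first):
for host in hadoop3 hadoop4; do
  ssh root@$host 'mkdir -p /local/zookeeper/zkdata'
  scp -r /local/zookeeper/zookeeper-3.4.8 root@$host:/local/zookeeper/
done
ssh root@hadoop3 'echo 2 > /local/zookeeper/zkdata/myid'
ssh root@hadoop4 'echo 3 > /local/zookeeper/zkdata/myid'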
6. Startup
1) First start zk on each ZooKeeper node (hadoop2, hadoop3, hadoop4)
zkServer.sh start
# check its status
zkServer.sh status
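Once a quorum is up, one of the three nodes should report Mode: leader and the other two Mode: follower.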
2) Start the journalnodes (on hadoop1, hadoop2 and hadoop3, matching dfs.namenode.shared.edits.dir)
hadoop-daemon.sh start journalnode
3) Pick one namenode (e.g. hadoop1) and format it:
hdfs namenode -format    <only on the very first setup, never afterwards>
4) Start the namenode that was just formatted, so the other one can sync from it
hadoop-daemon.sh start namenode
5) On the other namenode machine (hadoop2), sync the namenode metadata:
hdfs namenode -bootstrapStandby
6) Format zk:
hdfs zkfc -formatZK    <only on the very first setup, never afterwards>
7) Start hdfs
start-dfs.sh
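If startup succeeded, jps on each node should match the role-planning table; roughly (process names as printed by jps):
# hadoop1: NameNode, JournalNode, DFSZKFailoverController
# hadoop2: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain
# hadoop3: DataNode, JournalNode, QuorumPeerMain
# hadoop4: DataNode, QuorumPeerMain
jps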
7. Access
hadoop1:50070
hadoop2:50070
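The HA state can also be checked, and a failover exercised, from the command line (nn1 and nn2 are the ids from dfs.ha.namenodes.mycluster):
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
One should report active and the other standby; killing the active NameNode process should make the standby take over automatically.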