Hadoop 3.1.1 Cluster Setup with High-Availability Configuration (Detailed Guide)

I. Introduction

Hadoop is a top-level Apache Foundation project. Its earliest releases date back more than a decade, and after years of rapid iteration it reached version 3.1.1 in 2018. Most guides on the web still cover older versions; this one covers the latest release. Using Hadoop 3.1.1 as the example, this article walks through building a Hadoop cluster from scratch.

II. Preparation

Cluster role assignment overview (as used in the configuration files below):

          NameNode   JournalNode   DataNode   ResourceManager   ZooKeeper
node1        ✓            ✓                          ✓
node2        ✓            ✓            ✓                             ✓
node3                                  ✓                             ✓
node4                                  ✓                             ✓

1. Platform

Hadoop needs to be installed on a Linux system. Two well-known Linux distributions in China are Ubuntu and CentOS.
Official download pages:
Ubuntu : https://www.ubuntu.com/download/desktop
CentOS : https://www.centos.org/download/

2. Software packages

(1) JDK: Hadoop is written in Java, so it needs a JVM to run. This guide uses JDK 1.8; download it here:
https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
(screenshot: JDK download page)
(2) Hadoop: official mirror download: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-3.1.1/
(screenshot: Hadoop 3.1.1 download page)
(3) ZooKeeper
Download: http://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/
(screenshot: ZooKeeper download page)
(4) Handy tools
Windows: VMware as the virtual machine; Xshell to manage Linux connections; Xftp to upload files to Linux
macOS: VMware as the virtual machine; zoc7 to manage Linux connections; FileZilla to upload files to Linux

III. Let's Get Started

1. Configure a static IP

Edit the network configuration file (CentOS 7 as the example).

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Set the following:

++++++++++++++++++++++++++++++++
DEVICE="eth0"
BOOTPROTO="static" # change the original value "dhcp" to "static"
HWADDR="00:0C:29:F2:4E:96"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="b68b1ef8-13a0-4d11-a738-1ae704e6a0a4"
IPADDR=192.168.1.16    # the IP address you want to assign
NETMASK=255.255.255.0  # subnet mask
GATEWAY=192.168.1.1    # default gateway
++++++++++++++++++++++++++++++++

Save and exit, then restart the network service:

service network restart

Check the result:

ifconfig -a
+++++++++++++++++++++++++++++++++

ens33 	  Link encap:Ethernet  HWaddr 00:0C:29:F2:4E:96  
          inet addr:192.168.1.16  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef2:4e96/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17017 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9586 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7803412 (7.4 MiB)  TX bytes:1613751 (1.5 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:21844 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21844 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2042507 (1.9 MiB)  TX bytes:2042507 (1.9 MiB)

+++++++++++++++++++++++++++++++++

At this point, try pinging an external site:

ping baidu.com

I have run into two outcomes:

1) The ping succeeds. Oh yeah!

2) An error: ping: unknown host baidu.com

In that case, my fix was:

dhclient

After running that command, ping again:

  ---------------------------------
  PING baidu.com (180.149.132.47) 56(84) bytes of data.
  64 bytes from 180.149.132.47: icmp_seq=1 ttl=54 time=38.3 ms
  64 bytes from 180.149.132.47: icmp_seq=2 ttl=54 time=38.7 ms
  64 bytes from 180.149.132.47: icmp_seq=3 ttl=54 time=49.7 ms
  64 bytes from 180.149.132.47: icmp_seq=4 ttl=54 time=38.1 ms
  64 bytes from 180.149.132.47: icmp_seq=5 ttl=54 time=37.9 ms
  64 bytes from 180.149.132.47: icmp_seq=6 ttl=54 time=38.3 ms
  ---------------------------------

Anyway, that is how I solved it.
  Others solve it by simply rebooting after configuring the static IP.
One more possible situation: pinging an external IP works, but pinging a domain name fails. My fix for that is to set a DNS server:

  vi /etc/resolv.conf
  nameserver 114.114.114.114  # I found this value in the status details of my local network connection

Save and exit, then ping again!
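Instead of editing resolv.conf by hand, the same change can be scripted. A minimal sketch (114.114.114.114 is the resolver used above; any reachable DNS server works):

```shell
# Append a nameserver entry to the resolver configuration
echo "nameserver 114.114.114.114" >> /etc/resolv.conf

# Verify that name resolution now works
ping -c 3 baidu.com
```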

2. Configure passwordless SSH

(1) How it works
(diagram: passwordless SSH)
(2) Setup

Run ssh-keygen -t rsa, as shown:
(screenshot: ssh-keygen)
Enter the SSH home directory: cd ~/.ssh
(screenshot: contents of ~/.ssh)
Here id_rsa is the private key and id_rsa.pub is the public key file.
Append the public key to the authorized_keys file: cat id_rsa.pub >> authorized_keys
This enables passwordless login to the local machine.
To reach the other nodes, first distribute id_rsa.pub to them:

scp  id_rsa.pub username@hostname-or-ip:"directory"

Then repeat the append step above on each node.
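The manual scp-and-append steps can also be condensed with ssh-copy-id, which appends the key to the remote authorized_keys and fixes permissions in one step. A sketch assuming this guide's node2–node4 hostnames and the root user:

```shell
# Generate a key pair once, non-interactively (empty passphrase)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Authorize the key on the local machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Push the public key to every other node (asks for each password once)
for host in node2 node3 node4; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$host"
done
```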

3. Install JDK 1.8 and configure environment variables

(1) Extract the JDK archive

tar xzvf jdk-8u191-linux-x64.tar.gz

(2) Configure environment variables
CentOS 7 as the example: vi /etc/profile
Append at the end:

export JAVA_HOME=/usr/java/jdk1.8.0_191  # adjust to your actual path
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

4. Install the ZooKeeper coordination service

(1) Extract

 tar zxvf zookeeper-3.4.6.tar.gz

(2) Configure
My previous post explains this in detail, so I won't repeat it here: https://blog.csdn.net/u011328843/article/details/84190285
The zoo.cfg configuration file is as follows:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeepertmp
dataLogDir=/opt/zookeepertmp/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=node2:2888:3888
server.2=node3:2888:3888
server.3=node4:2888:3888

Remember to create the dataDir directory (/opt/zookeepertmp) and the dataLogDir directory (/opt/zookeepertmp/log).
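Note that the server.N entries in zoo.cfg only describe the ensemble; each node must also identify itself through a myid file in dataDir, a step that is easy to miss. A sketch, where the numbers must match the server.N lines above:

```shell
# Create the data and log directories (on node2, node3 and node4)
mkdir -p /opt/zookeepertmp/log

# Each node writes its own id into dataDir/myid:
echo 1 > /opt/zookeepertmp/myid   # on node2 (server.1)
echo 2 > /opt/zookeepertmp/myid   # on node3 (server.2)
echo 3 > /opt/zookeepertmp/myid   # on node4 (server.3)
```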

(3) Configure environment variables and start
Environment variables:

export ZOOKEEPER_HOME=/opt/zookeeper-3.4.6
export PATH=$ZOOKEEPER_HOME/bin:$PATH

Start command: zkServer.sh start

5. Install fully distributed Hadoop 3.1.1 and configure high availability

(1) Extract
Upload "hadoop-3.1.1.tar.gz" to Linux, then extract it; the command is:

tar zxvf hadoop-3.1.1.tar.gz

(2) Configure
I won't explain the underlying principles here (they will get a post of their own); the configuration file contents are given directly.
First, the environment variables:

export HADOOP_HOME=/opt/hadoop-3.1.1   # adjust to your actual path
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib

The configuration files live in the following directory:

  cd hadoop-3.1.1/etc/hadoop/

The main files to change are hdfs-site.xml, core-site.xml, mapred-site.xml, yarn-site.xml, hadoop-env.sh and yarn-env.sh.

hdfs-site.xml

<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node2:9869</value>
</property>
<property>
   <name>dfs.nameservices</name>
   <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>node1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>node2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>node1:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>node2:9870</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node1:8485;node2:8485/mycluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
 <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
</property>
<property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/root/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/hadooptmp/ha/journalnode</value>
</property>
<property>
   <name>dfs.ha.automatic-failover.enabled</name>
   <value>true</value>
</property>
</configuration>

core-site.xml

<configuration>
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
</property>
<property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadooptmp/ha</value>
</property>
<property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
</property> 
<property>
        <name>ha.zookeeper.quorum</name>
        <value>node2:2181,node3:2181,node4:2181</value>
</property>
</configuration>

mapred-site.xml

<configuration>
   <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
   </property>
   <property>
     	<name>mapreduce.application.classpath</name>
     	<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
   </property>
</configuration>

yarn-site.xml

<configuration>
	<property>
        <name>yarn.resourcemanager.hostname</name>
        <!-- your node's hostname / alias -->
        <value>node1</value>
    </property>
    <property>
      	  <name>yarn.nodemanager.env-whitelist</name>
          <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
   <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>node1:8088</value>
    </property>
    <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
    </property>
</configuration>	

Hadoop 3.1 requires the user running each daemon to be declared in the startup environment files, as follows:
Add the following to hadoop-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_191
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_ZKFC_USER=root
export HDFS_JOURNALNODE_USER=root

Add the following to yarn-env.sh:

export YARN_RESOURCEMANAGER_USER=root
export HADOOP_SECURE_DN_USER=yarn
export YARN_NODEMANAGER_USER=root

Don't forget to edit the workers file under <hadoop root>/etc/hadoop/, otherwise the cluster will not start.
It lists the DataNode hostnames:

node2
node3
node4
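All of the files above must be identical on every node. Rather than editing each machine by hand, you can configure node1 first and then push the tree out; a sketch assuming the /opt/hadoop-3.1.1 path used in this guide:

```shell
# Copy the configured Hadoop installation (and the profile edits) to the other nodes
for host in node2 node3 node4; do
  scp -r /opt/hadoop-3.1.1 root@"$host":/opt/
  scp /etc/profile root@"$host":/etc/profile
done
```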

(3) HA cluster startup order:

  1. Start ZooKeeper

      zkServer.sh start   # run on node2, node3 and node4
    
  2. Start the JournalNodes

      Manually start the JournalNode process on every JournalNode host (node1, node2)
      # Hadoop 2.x command
      hadoop-daemon.sh start journalnode
      # Hadoop 3.x command
      hdfs --daemon start journalnode 
    
  3. Format ZKFC on one of the NameNodes

      hdfs zkfc -formatZK  
    
  4. Format the primary NameNode and start it

    hdfs namenode -format            # format
    hdfs --daemon start namenode     # start the NameNode
    
  5. Sync the standby NameNode from the primary's formatted metadata

    hdfs namenode -bootstrapStandby
    
  6. Start the cluster

    start-all.sh
    

(4) The finished result
node1 web UI (Overview)
(screenshot)
node1 web UI (Datanode Information)
(screenshot)
node2 web UI (Overview)
(screenshot)
node1 ResourceManager page
(screenshot)
If you have made it this far, congratulations, you are done! Keep following my CSDN blog and you will learn something new!
