VMware CentOS 6.5: Hadoop 3 + ZooKeeper Cluster Setup

 

I. Preparing the CentOS 6.5 Virtual Machines (VMware)

1. Using the same user, learn, install three CentOS 6.5 virtual machines: vm1, vm2, and vm3.

vm1 and vm2 act as NameNode + DataNode

vm3 acts as DataNode only

2. Configure networking for vm1, vm2, and vm3 using either NAT or bridged mode. This article uses NAT, with the following IP addresses:

     vm1: 192.168.60.128  vm1.learn.com

     vm2: 192.168.60.130  vm2.learn.com

     vm3: 192.168.60.131  vm3.learn.com

Make sure the nodes can ping each other, and add every node's host entry to /etc/hosts.
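With the addresses above, every node's /etc/hosts would contain entries like the following (the short aliases after the FQDNs are an optional addition for illustration):

```
192.168.60.128  vm1.learn.com  vm1
192.168.60.130  vm2.learn.com  vm2
192.168.60.131  vm3.learn.com  vm3
```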

Comment out the following two lines, or simply do not map vm1.learn.com to 127.0.0.1/::1:

127.0.0.1  vm1.learn.com  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        vm1.learn.com localhost localhost.localdomain localhost6 localhost6.localdomain6

vm1.learn.com must not be mapped to 127.0.0.1 or ::1; otherwise the Hadoop cluster fails to start with: failed on connection exception: java.net.ConnectException: Connection refused.

Reference: https://wiki.apache.org/hadoop/ConnectionRefused
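A quick way to catch this misconfiguration is to grep /etc/hosts for a loopback mapping. A small sketch (the hostname argument is this article's example name):

```shell
# Print BAD if the given hostname is mapped to 127.0.0.1 or ::1 in /etc/hosts,
# OK otherwise. Run it on each node before starting the cluster.
check_hosts() {
  host="$1"
  if grep -Eq "^(127\.0\.0\.1|::1).*${host}" /etc/hosts; then
    echo "BAD: ${host} is mapped to a loopback address"
  else
    echo "OK"
  fi
}

check_hosts "vm1.learn.com"
```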

 

3. Set up passwordless SSH login for every machine; see: https://blog.csdn.net/zhujq_icode/article/details/82629745
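The key setup in the linked article boils down to generating a passphrase-less key pair and distributing the public key to every node. A sketch (it uses a temporary directory so it is safe to run anywhere; in practice the key lives at ~/.ssh/id_rsa, and the ssh-copy-id loop needs the real hosts, so it is shown commented out):

```shell
# Generate a passphrase-less RSA key pair (no prompts thanks to -q and -N "").
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"

# Distribution step, to run on the real cluster as the learn user:
# for host in vm1.learn.com vm2.learn.com vm3.learn.com; do
#   ssh-copy-id -i "$keydir/id_rsa.pub" learn@"$host"
# done
```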

 

II. ZooKeeper Cluster Setup

See: https://blog.csdn.net/zhujq_icode/article/details/82687037

III. Hadoop 3 Cluster Setup

1. Download the Hadoop 3 tarball; this article uses hadoop-3.0.3.tar.gz.

2. On vm1, extract the tarball into the installation directory /home/learn/app/hadoop/:

tar -zxvf hadoop-3.0.3.tar.gz -C /home/learn/app/hadoop/

3. Configure environment variables by appending the following to /etc/profile:

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_181
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export HADOOP_HOME=/home/learn/app/hadoop/hadoop-3.0.3
export PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${PATH}

If Java is not installed, install Java 8 first.
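The resulting PATH can be sanity-checked by replaying the exports in a shell (paths as assumed in this article):

```shell
# Replay the profile exports and confirm the Hadoop bin directories come first.
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_181
export HADOOP_HOME=/home/learn/app/hadoop/hadoop-3.0.3
export PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${PATH}
echo "$PATH" | tr ':' '\n' | head -n 3
```

Once these are in place (after source /etc/profile), hadoop version should resolve from any directory.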

4. Configure Hadoop

(1) Set JAVA_HOME in hadoop-env.sh:

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_181

(2) core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/learn/data/hadoopcluster/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>vm1.learn.com:2181,vm2.learn.com:2181,vm3.learn.com:2181</value>
   </property>

</configuration>

(3) hdfs-site.xml

<configuration>
	<property>
	  <name>dfs.nameservices</name>
	  <value>mycluster</value>
	</property>

	<!-- mycluster has two NameNodes: nn1 and nn2 -->
	<property>
	  <name>dfs.ha.namenodes.mycluster</name>
	  <value>nn1,nn2</value>
	</property>
	<property>
	  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
	  <value>vm1.learn.com:9820</value>
	</property>
	<property>
	  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
	  <value>vm2.learn.com:9820</value>
	</property>

	<property>
	  <name>dfs.namenode.http-address.mycluster.nn1</name>
	  <value>vm1.learn.com:9870</value>
	</property>
	<property>
	  <name>dfs.namenode.http-address.mycluster.nn2</name>
	  <value>vm2.learn.com:9870</value>
	</property>
	
    <!-- Enable automatic NameNode failover -->
	<property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
     </property>
	<!-- Where the NameNodes' shared edit log is stored (on the JournalNodes) -->
	<property>
	  <name>dfs.namenode.shared.edits.dir</name>
	  <value>qjournal://vm1.learn.com:8485;vm2.learn.com:8485;vm3.learn.com:8485/mycluster</value>
	</property>
	<!-- Where each JournalNode stores its data on local disk -->
	<property>
	  <name>dfs.journalnode.edits.dir</name>
	  <value>/home/learn/data/hadoopcluster/data/journaldata/jn</value>
	</property>
	<!-- Client failover proxy provider (how clients find the active NameNode) -->
	<property>
	  <name>dfs.client.failover.proxy.provider.mycluster</name>
	  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!-- Fencing methods; to use several, put one per line -->
	<property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
	<!-- The sshfence mechanism requires passwordless SSH -->
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/learn/.ssh/id_rsa</value>
    </property>
	<!-- Timeout for the sshfence mechanism -->
	<property>
      <name>dfs.ha.fencing.ssh.connect-timeout</name>
      <value>30000</value>
    </property>


</configuration>

(4) mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

This selects YARN as the MapReduce execution framework.
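Note: on Hadoop 3, submitted MapReduce jobs may fail to locate MRAppMaster unless HADOOP_MAPRED_HOME is visible to the application master and tasks. If that happens, a commonly used addition to mapred-site.xml is the following (the path follows this article's layout; adjust to your install):

```xml
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/home/learn/app/hadoop/hadoop-3.0.3</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/home/learn/app/hadoop/hadoop-3.0.3</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/home/learn/app/hadoop/hadoop-3.0.3</value>
</property>
```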

(5) yarn-site.xml

<configuration>

	<!-- Enable ResourceManager high availability -->
	<property>
		<name>yarn.resourcemanager.ha.enabled</name>
		<value>true</value>
	</property>

	<property>
		<name>yarn.resourcemanager.cluster-id</name>
		<value>yarn-rm-cluster</value>
	</property>
	
	<property>
		<name>yarn.resourcemanager.ha.rm-ids</name>
		<value>rm1,rm2</value>
	</property>
	
	<property>
		<name>yarn.resourcemanager.hostname.rm1</name>
		<value>vm1.learn.com</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname.rm2</name>
		<value>vm2.learn.com</value>
	</property>
	

	<property>
		<name>yarn.resourcemanager.zk-address</name>
		<value>vm1.learn.com:2181,vm2.learn.com:2181,vm3.learn.com:2181</value>
	</property>

	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
</configuration>

(6) Configure the workers file (etc/hadoop/workers; in Hadoop 3 this replaces the slaves file of Hadoop 2):

vm1.learn.com
vm2.learn.com
vm3.learn.com

 

5. Copy the installation directory and configuration files to the other two machines, vm2 and vm3.
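Step 5 can be scripted. The sketch below only echoes the scp commands so it can be reviewed (and run anywhere) first; drop the echo to perform the actual copy, and remember that the /etc/profile additions are needed on vm2 and vm3 as well:

```shell
# Build and print the copy command for each remaining node.
src=/home/learn/app/hadoop/hadoop-3.0.3
hosts="vm2.learn.com vm3.learn.com"
for host in $hosts; do
  echo scp -r "$src" learn@"$host":/home/learn/app/hadoop/
done
```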

IV. Cluster Initialization

1. On vm2, initialize HDFS (format the file system):

hdfs namenode -format

With quorum-journal HA, the JournalNodes must be reachable when formatting; if the command fails with a connection error, start one on each node first with hdfs --daemon start journalnode.

Copy the initialized files over to vm1, the other NameNode:

[learn@vm2 ~]$ scp -r /home/learn/data/hadoopcluster/tmp/ learn@vm1.learn.com:/home/learn/data/hadoopcluster/
fsimage_0000000000000000000.md5               100%   62     0.1KB/s   00:00    
seen_txid                                     100%    2     0.0KB/s   00:00    
fsimage_0000000000000000000                   100%  350     0.3KB/s   00:00    
VERSION                                       100%  200     0.2KB/s   00:00 

2. On vm2, format the ZooKeeper failover state:

hdfs zkfc -formatZK

V. Starting the Hadoop Cluster

1. Start DFS from vm2:

[learn@vm2 sbin]$ ./start-dfs.sh
Starting namenodes on [vm1.learn.com vm2.learn.com]
Starting datanodes
Starting journal nodes [vm1.learn.com vm2.learn.com vm3.learn.com]
2018-09-13 15:15:54,975 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [vm1.learn.com vm2.learn.com]

2. Use jps to check that the processes on vm1, vm2, and vm3 all started correctly:

//vm1 NameNode DataNode
[learn@vm1 sbin]$ jps
9969 Jps
9618 DataNode
9867 DFSZKFailoverController
9726 JournalNode
3118 QuorumPeerMain
9534 NameNode


//vm2 NameNode DataNode
[learn@vm2 sbin]$ jps
10913 NameNode
2562 QuorumPeerMain
11268 JournalNode
11597 Jps
11037 DataNode
11486 DFSZKFailoverController


//vm3 DataNode
[learn@vm3 sbin]$ jps
6067 JournalNode
2581 QuorumPeerMain
6166 Jps
5963 DataNode

3. Web UI: http://vm1.learn.com:9870/ and http://vm2.learn.com:9870/

4. Start YARN from vm1:

[learn@vm1 sbin]$ ./start-yarn.sh

jps then shows the new processes on the vm1 and vm2 nodes:

11845 NodeManager
11767 ResourceManager
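To confirm that failover is wired up, the active/standby state of each NameNode and ResourceManager can be queried (these commands run on the cluster itself; nn1/nn2 and rm1/rm2 are the IDs configured above):

```
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```

One of each pair should report active and the other standby; killing the active NameNode should make the standby take over.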

 
