Notes on setting up a Hadoop 2.7 cluster on 64-bit CentOS 7. These notes are for reference only!
1. Operating System Configuration
1.1. Operating system environment
Hostname | IP address | Role | Hadoop user |
---|---|---|---|
hadoop-master | 192.168.30.60 | NameNode、ResourceManager、SecondaryNameNode | hadoop |
hadoop-slave01 | 192.168.30.61 | DataNode、NodeManager | hadoop |
hadoop-slave02 | 192.168.30.62 | DataNode、NodeManager | hadoop |
hadoop-slave03 | 192.168.30.63 | DataNode、NodeManager | hadoop |
1.2. Disable the firewall and SELinux
1.2.1. Disable the firewall
$ systemctl stop firewalld
$ systemctl disable firewalld
1.2.2. Disable SELinux
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux
Note: run the commands above as root. setenforce 0 turns enforcement off immediately; the sed edit makes the change persist across reboots. Anchoring the pattern to the SELINUX= line avoids also rewriting the word "enforcing" inside the file's comments.
1.3. hosts configuration
$ vi /etc/hosts
########## Hadoop host ##########
192.168.30.60 hadoop-master
192.168.30.61 hadoop-slave01
192.168.30.62 hadoop-slave02
192.168.30.63 hadoop-slave03
Note: run the above as root. Verify by pinging each hostname; it should resolve to the corresponding IP.
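All four entries can be checked in one pass; a small sketch, assuming the hostnames from section 1.1 (adjust if yours differ):

```shell
# Check that every cluster hostname resolves via /etc/hosts (or DNS).
# getent consults the same NSS lookup path the system itself uses.
for h in hadoop-master hadoop-slave01 hadoop-slave02 hadoop-slave03; do
  if getent hosts "$h" >/dev/null; then
    echo "$h resolves"
  else
    echo "$h MISSING - check /etc/hosts"
  fi
done
```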
1.4. Configure passwordless SSH
First create the hadoop user, then configure passwordless SSH among all four hosts as that user. The steps are identical on every host; hadoop-master is used as the example.
Generate the key pair:
$ ssh-keygen -t rsa
Copy the public key to every host (the hadoop password is required):
$ ssh-copy-id hadoop@hadoop-master
$ ssh-copy-id hadoop@hadoop-slave01
$ ssh-copy-id hadoop@hadoop-slave02
$ ssh-copy-id hadoop@hadoop-slave03
Note: run the above as the hadoop user. Verify that the hadoop user can ssh to every other host without being asked for a password.
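Instead of typing ssh-copy-id four times, the key distribution and the verification can each be wrapped in a loop; a sketch, again assuming the hostnames from section 1.1:

```shell
# Push the hadoop user's public key to every node in one loop.
# ssh-copy-id prompts for the hadoop password once per host.
for h in hadoop-master hadoop-slave01 hadoop-slave02 hadoop-slave03; do
  ssh-copy-id "hadoop@$h"
done

# Afterwards, verify each host is reachable without a password
# (BatchMode makes ssh fail instead of prompting).
for h in hadoop-master hadoop-slave01 hadoop-slave02 hadoop-slave03; do
  ssh -o BatchMode=yes "hadoop@$h" hostname
done
```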
2. Java Environment
2.1. Download the JDK
Note: run as the hadoop user.
$ cd /home/hadoop
$ curl -o jdk-8u151-linux-x64.tar.gz http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz?AuthParam=1516091623_fa4174d4b1eed73f36aa38230498cd48
Note: the AuthParam token in the Oracle URL is time-limited; if the download fails, obtain a fresh link from the Oracle download page.
2.2. Install Java
Install Java as the hadoop user:
$ mkdir -p /home/hadoop/app/java
$ tar -zxf jdk-8u151-linux-x64.tar.gz
$ mv jdk1.8.0_151 /home/hadoop/app/java/jdk1.8
- Configure the Java environment variables:
$ vi /home/hadoop/.bash_profile
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Load the new environment variables:
$ source /home/hadoop/.bash_profile
Note: java -version should now print the Java version information.
3. Hadoop Installation and Configuration
Install and configure Hadoop as the hadoop user.
3.1. Install Hadoop
- Download Hadoop 2.7.5
$ curl -O https://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz
Note: www.apache.org/dyn/closer.cgi returns a mirror-selection page rather than the tarball itself, so download from the Apache archive (or substitute a direct mirror URL).
- Create the Hadoop installation directories
$ mkdir -p /home/hadoop/app/hadoop/{tmp,hdfs/{data,name}}
- Extract the Hadoop tarball into the installation directory
$ tar -zxf hadoop-2.7.5.tar.gz -C /home/hadoop/app/hadoop
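The mkdir command above relies on bash's nested brace expansion, which creates the tmp, hdfs/data and hdfs/name directories in a single call. A quick illustration in a throwaway location:

```shell
# Demonstrate what mkdir -p .../{tmp,hdfs/{data,name}} expands to,
# using a temporary directory instead of /home/hadoop (bash required).
base=$(mktemp -d)
mkdir -p "$base"/hadoop/{tmp,hdfs/{data,name}}
find "$base" -mindepth 1 -type d | sort   # lists hadoop, hadoop/hdfs, hadoop/hdfs/data, hadoop/hdfs/name, hadoop/tmp
rm -rf "$base"
```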
3.2. Configure Hadoop
The Hadoop configuration files are XML files; edit them as the hadoop user.
3.2.1. Configure core-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/app/hadoop/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
</configuration>
3.2.2. Configure hdfs-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/app/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/app/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
3.2.3. Configure mapred-site.xml
mapred-site.xml must first be copied from its template and then edited:
$ cp /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-site.xml.template /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop-master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop-master:19888</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/history/done</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/history/done_intermediate</value>
</property>
</configuration>
3.2.4. Configure yarn-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-master</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-master:8088</value>
</property>
</configuration>
3.2.5. Configure slaves
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/slaves
hadoop-slave01
hadoop-slave02
hadoop-slave03
3.2.6. Configure hadoop-env
Set the JAVA_HOME variable in hadoop-env.sh:
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
3.2.7. Configure yarn-env
Set the JAVA_HOME variable in yarn-env.sh:
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/yarn-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
3.2.8. Configure mapred-env
Set the JAVA_HOME variable in mapred-env.sh:
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
3.3. Copy the Hadoop installation to the slaves
$ scp -r /home/hadoop/app/hadoop hadoop@hadoop-slave01:/home/hadoop/app/
$ scp -r /home/hadoop/app/hadoop hadoop@hadoop-slave02:/home/hadoop/app/
$ scp -r /home/hadoop/app/hadoop hadoop@hadoop-slave03:/home/hadoop/app/
3.4. Configure the Hadoop environment variables
On every machine, edit the hadoop user's .bash_profile and append the following:
$ vi /home/hadoop/.bash_profile
### Hadoop PATH
export HADOOP_HOME=/home/hadoop/app/hadoop/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Load the environment variables:
$ source /home/hadoop/.bash_profile
Note: this configures hadoop's per-user environment variables. For a system-wide setting, add a file under /etc/profile.d/ instead.
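Before moving on, it is worth confirming that the PATH change actually took effect; a minimal check (once this passes, `hadoop version` should also work):

```shell
# Re-create the PATH addition and check that the Hadoop bin directory
# is actually on PATH before relying on the hadoop/hdfs commands.
export HADOOP_HOME=/home/hadoop/app/hadoop/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin is on PATH" ;;
  *)                      echo "PATH not updated" ;;
esac
```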
3.5. Start Hadoop
Initialize the HDFS filesystem on the master node, then start the cluster.
3.5.1. Initialize the HDFS filesystem
$ hdfs namenode -format
3.5.2. Start and stop the Hadoop cluster
- Start:
$ start-all.sh
Note: because jobhistory is configured in mapred-site.xml, the history server must be started separately:
$ mr-jobhistory-daemon.sh start historyserver
- Stop:
$ stop-all.sh
Note: the daemons can also be started one at a time, in the order NameNode --> DataNodes --> YARN --> NodeManagers --> history server, for example:
$ hadoop-daemon.sh start namenode
$ hadoop-daemons.sh start datanode
$ yarn-daemon.sh start resourcemanager
$ yarn-daemons.sh start nodemanager
$ mr-jobhistory-daemon.sh start historyserver
- Processes on the master:
$ jps
3124 NameNode
3285 SecondaryNameNode
3451 ResourceManager
4254 Jps
- Processes on the slaves:
$ jps
3207 Jps
2409 NodeManager
2332 DataNode
- MapReduce pi estimation
$ hadoop jar /home/hadoop/app/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 5 10
The job returns: Estimated value of Pi is 3.28000000000000000000 (coarse, because only 5 maps with 10 samples each were used; more samples give a better estimate).
- YARN web UI: http://192.168.30.60:8088
- HDFS web UI: http://192.168.30.60:50070
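The pi job samples points in the unit square and counts how many fall inside the quarter circle, which is why so few samples give a rough answer. The same sampling idea in a few lines of awk (illustrative only; the example jar uses a quasi-random scheme, not plain rand()):

```shell
# Plain Monte Carlo estimate of pi: the fraction of random points that
# land inside the quarter circle, times 4. More samples -> tighter estimate.
awk 'BEGIN {
  srand(1); n = 100000; hits = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x*x + y*y <= 1) hits++
  }
  printf "%.2f\n", 4 * hits / n
}'
```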
3.6. MapReduce wordcount test
- Create the input directory and the output directory
$ hadoop fs -mkdir -p /user/hadoop/input
$ hadoop fs -mkdir -p /user/hadoop/output
- Upload the test file The_Man_of_Property
$ hadoop fs -put The_Man_of_Property /user/hadoop/input
- Run the job:
$ hadoop jar /home/hadoop/app/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar wordcount /user/hadoop/input /user/hadoop/output/wordcounttest
- Inspect the output
$ hadoop fs -ls /user/hadoop/output/wordcounttest
Found 2 items
-rw-r--r-- 3 hadoop supergroup 0 2018-01-17 14:32 /user/hadoop/output/wordcounttest/_SUCCESS
-rw-r--r-- 3 hadoop supergroup 181530 2018-01-17 14:32 /user/hadoop/output/wordcounttest/part-r-00000
$ hadoop fs -get /user/hadoop/output/wordcounttest/part-r-00000 ./
$ cat part-r-00000 |sort -k2 -nr|head
the 5144
of 3407
to 2782
and 2573
a 2543
he 2139
his 1912
was 1702
in 1694
had 1526
4. References
https://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-common/ClusterSetup.html