Steps
- Set up a pseudo-distributed Hadoop environment on one host; it becomes the master node.
- Clone it to the other two slave hosts by copying the disk. On each host, change the hostname in /etc/sysconfig/network and edit /etc/hosts so that every IP maps to its hostname (see the sketch after this list).
- Edit the slaves and masters files and the other configuration files (in Hadoop 2.7.7 they live under etc/hadoop/, not conf/).
- Copy the master's hadoop directory to the slaves: sudo scp -r /usr/hadoop/hadoop-2.7.7 slave1:/usr/hadoop
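A minimal sketch of those per-host edits, assuming CentOS 6 and the master IP 192.168.139.95 that appears in the logs below; the slave addresses are placeholders:

# /etc/sysconfig/network on slave1
NETWORKING=yes
HOSTNAME=slave1

# /etc/hosts on every node (slave IPs are assumed)
192.168.139.95 master
192.168.139.96 slave1
192.168.139.97 slave2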
Configuration files
core-site.xml
<configuration>
    <!-- RPC address of HDFS (the NameNode) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <!-- Where Hadoop stores the files it generates at runtime -->
    <!-- The tmp directory must be created by hand and its owner changed to the user running Hadoop, or be given 777 permissions -->
    <!-- It is a directory, not a file: use mkdir, not touch -->
    <!-- chmod 777 /usr/hadoop/hadoop-2.7.7/tmp -->
    <!-- chown -R sunsi /usr/hadoop/hadoop-2.7.7/tmp -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/hadoop-2.7.7/tmp</value>
    </property>
</configuration>
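To confirm the NameNode address is actually picked up, the value can be read back (run from /usr/hadoop/hadoop-2.7.7):

bin/hdfs getconf -confKey fs.defaultFS
# expected: hdfs://master:9000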
hdfs-site.xml
<configuration>
    <!-- HTTP address of the NameNode web UI -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50070</value>
    </property>
    <!-- HTTP address of the SecondaryNameNode web UI -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave1:50090</value>
    </property>
    <!-- Where the NameNode stores its metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.7/dfs/namenode</value>
    </property>
    <!-- HDFS replication factor (note: this cluster has only two DataNodes, so 3 leaves blocks under-replicated; 2 would match the cluster) -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Where the DataNodes store block data -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.7/dfs/datanode</value>
    </property>
</configuration>
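Because the metadata and block paths are spelled out explicitly, it may help to create them up front on the right nodes, as the Hadoop user so that ownership comes out correct; a sketch:

mkdir -p /usr/hadoop/hadoop-2.7.7/dfs/namenode    # on master
mkdir -p /usr/hadoop/hadoop-2.7.7/dfs/datanode    # on slave1 and slave2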
mapred-site.xml
First rename the shipped template:
mv mapred-site.xml.template mapred-site.xml
<configuration>
    <!-- Tell the MR framework to run on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
yarn-site.xml
<configuration>
    <!-- Which node runs the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <!-- Reducers fetch map output via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
masters
Create a new file named masters; it names the host that runs the SecondaryNameNode:
slave1
slaves
slave1
slave2
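A sketch of writing both files in place (in Hadoop 2.7.7 they belong in etc/hadoop/):

cd /usr/hadoop/hadoop-2.7.7/etc/hadoop
printf 'slave1\n' > masters
printf 'slave1\nslave2\n' > slaves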
- Create the tmp directory (do not use sudo mkdir, or the owner will be wrong):
mkdir tmp
- Copy the tree to the other hosts:
scp -r /usr/hadoop/hadoop-2.7.7 slave1:/usr/hadoop/
scp -r /usr/hadoop/hadoop-2.7.7 slave2:/usr/hadoop/
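Before the very first start, the NameNode has to be formatted once on master; the same command is used again in the troubleshooting section below to avoid namespaceID mismatches:

cd /usr/hadoop/hadoop-2.7.7
bin/hdfs namenode -format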
- Start everything from master:
./sbin/start-dfs.sh
./sbin/start-yarn.sh
Expected jps output per node:
master: NameNode, ResourceManager, Jps
slave1: DataNode, SecondaryNameNode, NodeManager, Jps
slave2: DataNode, NodeManager, Jps
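A quick way to check all three nodes at once, plus a smoke test with the bundled examples jar (assuming passwordless ssh between the nodes, which start-dfs.sh needs anyway):

for h in master slave1 slave2; do echo "== $h"; ssh $h jps; done
# smoke test: compute pi with 2 maps of 10 samples each
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 2 10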
Troubleshooting
The SecondaryNameNode does not start
- It needs to be started separately: sbin/hadoop-daemon.sh start secondarynamenode
- Or use start-dfs.sh and start-yarn.sh instead of start-all.sh
Error: checkpoint directory does not exist or is not accessible. Check the logs; the usual cause is that the Hadoop user lacks permission on that directory, which is fixed by handing it over:
sudo chown -R sunsi /usr/hadoop/hadoop-2.7.7/tmp
This goes back to the directory having been created with sudo mkdir, so root owned it.
Uploading a file to HDFS fails with "Name node is in safe mode"
bin/hadoop dfsadmin -safemode leave
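Safe mode normally switches off by itself once enough DataNodes have reported their blocks, so it is worth checking the state before forcing it:

bin/hdfs dfsadmin -safemode get
# prints "Safe mode is ON" or "Safe mode is OFF"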
master:50070 shows no data, and uploading to HDFS fails with: There are 0 datanode(s) running and no node(s) are excluded in this operation
The DataNode log shows: Retrying connect to server: master/192.168.139.95:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS). The DataNodes cannot establish a connection to the NameNode.
From a DataNode, pinging the NameNode works, but telnet to port 9000 first reports "No route to host" and later "Connection refused". The first message typically means a firewall is rejecting the packets; the second means the host is reachable but nothing is listening on that port at the address being contacted.
[sunsi@slave1 ~]$ telnet 192.168.139.95 9000
Trying 192.168.139.95...
telnet: connect to address 192.168.139.95: No route to host
[sunsi@slave1 ~]$ telnet 192.168.139.95 9000
Trying 192.168.139.95...
telnet: connect to address 192.168.139.95: Connection refused
Stop the firewall:
[sunsi@master usr]$ sudo service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
Disable the firewall permanently:
[sunsi@master usr]$ sudo chkconfig iptables off
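To see what chkconfig actually changed (it only controls whether iptables starts at boot in each runlevel):

chkconfig --list iptables
# roughly: iptables  0:off  1:off  2:off  3:off  4:off  5:off  6:off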
Check whether port 9000 is open; no output from lsof means nothing is listening:
[sunsi@master usr]$ lsof -i:9000
Open port 9000 by editing /etc/sysconfig/iptables:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 9000 -j ACCEPT
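Editing the file alone does not apply the rule; the service has to be restarted (CentOS 6 style):

sudo service iptables restart
sudo iptables -L -n | grep 9000    # the new ACCEPT rule should show up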
Restart dfs and yarn and check the listening ports. Earlier the NameNode showed up on 127.0.0.1:9000, reachable only from the local machine; that binding usually comes from an /etc/hosts line mapping master to 127.0.0.1, and removing it lets the NameNode bind to the LAN address, as the output below shows.
[sunsi@master usr]$ netstat -tpnl
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN -
tcp 0 0 192.168.139.95:50070 0.0.0.0:* LISTEN 5799/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:631 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:58821 0.0.0.0:* LISTEN -
tcp 0 0 192.168.139.95:9000 0.0.0.0:* LISTEN 5799/java
tcp 0 0 :::111 :::* LISTEN -
tcp 0 0 :::22 :::* LISTEN -
tcp 0 0 ::ffff:192.168.139.95:8088 :::* LISTEN 6116/java
tcp 0 0 ::1:25 :::* LISTEN -
tcp 0 0 ::ffff:192.168.139.95:8030 :::* LISTEN 6116/java
tcp 0 0 ::ffff:192.168.139.95:8031 :::* LISTEN 6116/java
tcp 0 0 ::ffff:192.168.139.95:8032 :::* LISTEN 6116/java
tcp 0 0 ::ffff:192.168.139.95:8033 :::* LISTEN 6116/java
tcp 0 0 :::40714 :::* LISTEN -
Uploading a file to HDFS fails with:
NoRouteToHostException: No route to host, together with could only be replicated to 0 nodes, instead of 1
Tutorials online all blame a firewall that has not been stopped:
service iptables stop
But the "permanent" disable had already been run:
[sunsi@master usr]$ sudo chkconfig iptables off
The catch is that chkconfig iptables off only keeps iptables from starting at the next boot; it does not stop the service that is already running. After service iptables stop the error disappeared. Earlier attempts, including deleting the contents of dfs/namenode and dfs/datanode and re-running hdfs namenode -format (to avoid a namespaceID mismatch between NameNode and DataNodes), had not helped; it really was the firewall.
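Once the DataNodes can reach port 9000, their registration can be confirmed from master before retrying the upload; somefile below is a hypothetical test file:

bin/hdfs dfsadmin -report          # should list two live datanodes
bin/hdfs dfs -put somefile /       # the upload that used to fail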