Hadoop Distributed Cluster Setup

In an earlier post I shared how to set up Hadoop in pseudo-distributed mode; this time let's look at the fully distributed setup.
Installing the OS on physical machines or VMs won't be covered again here, so let's get straight to it:

1. The network configuration; all addresses are static (a sample interface file is sketched after the list):

master : 192.168.80.128
slave1 : 192.168.80.129
slave2 : 192.168.80.130
slave3 : 192.168.80.131
slave4 : 192.168.80.132
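
For reference, a minimal sketch of one such static configuration, assuming a CentOS-style system (consistent with the systemctl and firewalld commands used below) and an interface named ens33; both are assumptions, so adjust them for your distribution. On master, /etc/sysconfig/network-scripts/ifcfg-ens33 would contain:

BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.80.128
NETMASK=255.255.255.0
# GATEWAY and DNS1 depend on your network and are omitted here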

2. Edit /etc/hosts on all five hosts and append the following entries:

192.168.80.128 master
192.168.80.129 slave1
192.168.80.130 slave2
192.168.80.131 slave3
192.168.80.132 slave4

3. Set each host's hostname to its name from the list above.
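
On a systemd-based distribution (consistent with the systemctl commands used below), this can be done with hostnamectl; for example, on the first slave:

$ sudo hostnamectl set-hostname slave1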

4. Configure passwordless SSH login so that each of the five hosts can reach every other host (and itself) without a password.
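
A minimal sketch of one way to do this: generate a key pair on each host, then push the public key to all five hosts (including the local one) with ssh-copy-id. Repeat on every machine:

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ for h in master slave1 slave2 slave3 slave4; do ssh-copy-id $h; done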

5. Disable the firewall on all five hosts:

$ sudo systemctl stop firewalld.service
$ sudo systemctl disable firewalld.service

The firewall must be disabled; otherwise, writing files to HDFS may fail with an error like the following:

[zhoupan@master ~]$ hadoop-2.8.0/bin/hadoop fs -put hadoop-2.8.0.tar.gz /data
17/07/14 01:47:58 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/hadoop-2.8.0.tar.gz._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1337)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
    at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1733)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1536)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:658)
put: File /input/hadoop-2.8.0.tar.gz._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

6. Configure the Java environment:

export JAVA_HOME=/usr/local/jdk1.8.0_131
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
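
These lines assume the JDK was unpacked to /usr/local/jdk1.8.0_131; adjust the path to match your installation. To make them permanent, append them to ~/.bashrc (or /etc/profile) on every host and reload the file:

$ source ~/.bashrc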

7. Download the Hadoop binary distribution:

$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz

8. Assuming the archive is in your home directory, extract it:

$ tar -zvxf hadoop-2.8.0.tar.gz

9. Create the tmp, hdfs/data, and hdfs/name directories under ~/hadoop-2.8.0:

$ mkdir ~/hadoop-2.8.0/tmp
$ mkdir ~/hadoop-2.8.0/hdfs
$ mkdir ~/hadoop-2.8.0/hdfs/data
$ mkdir ~/hadoop-2.8.0/hdfs/name

10. Edit ~/hadoop-2.8.0/etc/hadoop/core-site.xml to read as follows:

<configuration>
    <property>
        <!-- The NameNode's address and port (fs.default.name is the deprecated alias of fs.defaultFS) -->
        <name>fs.default.name</name>
        <value>hdfs://192.168.80.128:9000</value>
    </property>
    <property>
        <!-- Hadoop's local temporary directory. It defaults to /tmp; to prevent it from being wiped on reboot, point it somewhere else. -->
        <name>hadoop.tmp.dir</name>
        <value>file:/home/zhoupan/hadoop-2.8.0/tmp</value>
    </property>
</configuration>

11. Create ~/hadoop-2.8.0/etc/hadoop/mapred-site.xml from the bundled template:

$ cp ~/hadoop-2.8.0/etc/hadoop/mapred-site.xml.template ~/hadoop-2.8.0/etc/hadoop/mapred-site.xml

12. Edit ~/hadoop-2.8.0/etc/hadoop/mapred-site.xml to read as follows:

<configuration>
    <property>
        <!-- The JobTracker's IP address and port -->
        <name>mapred.job.tracker</name>
        <value>192.168.80.128:9001</value>
    </property>
</configuration>
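
Note that mapred.job.tracker is the old MRv1 property. Since this setup also starts YARN (configured in the next step), MapReduce jobs are normally directed at YARN by additionally setting the standard mapreduce.framework.name property inside the same <configuration> block:

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>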

13. Edit ~/hadoop-2.8.0/etc/hadoop/yarn-site.xml to read as follows:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>

14. Edit ~/hadoop-2.8.0/etc/hadoop/hdfs-site.xml to read as follows:

<configuration>
    <property>
        <!-- Where the NameNode stores the namespace image and edit logs -->
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/zhoupan/hadoop-2.8.0/hdfs/name</value>
    </property>
    <property>
        <!-- Where each DataNode stores its data blocks -->
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/zhoupan/hadoop-2.8.0/hdfs/data</value>
    </property>
</configuration>
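
The replication factor is not set here, so HDFS falls back to its default of 3, which four DataNodes can satisfy. To make it explicit, a dfs.replication property could be added to the same file:

    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>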

15. Edit ~/hadoop-2.8.0/etc/hadoop/slaves to list the four slaves (a note on distributing the configuration follows the list):

slave1
slave2
slave3
slave4
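
Everything so far has been done on master, but each slave needs the same Java environment and an identical ~/hadoop-2.8.0 tree. A sketch of pushing the configured directory out with scp, assuming the same user and home directory exist on every host:

$ for h in slave1 slave2 slave3 slave4; do scp -r ~/hadoop-2.8.0 $h:~/; done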

16. Format the NameNode:

$ ~/hadoop-2.8.0/bin/hadoop namenode -format

17. Start the cluster:

$ ~/hadoop-2.8.0/sbin/start-all.sh

18. Check the running processes on each node:

$ jps

On master:

(screenshot: jps output on the master, showing the NameNode, SecondaryNameNode, and ResourceManager processes)

On a slave:

(screenshot: jps output on a slave, showing the DataNode and NodeManager processes)

19. Test file storage in HDFS:

$ ~/hadoop-2.8.0/bin/hadoop fs -mkdir /data
$ ~/hadoop-2.8.0/bin/hadoop fs -ls /

(screenshot: the newly created /data directory in the HDFS listing)

$ ~/hadoop-2.8.0/bin/hadoop fs -put hadoop-2.8.0.tar.gz /data/
$ ~/hadoop-2.8.0/bin/hadoop fs -ls /
$ ~/hadoop-2.8.0/bin/hadoop fs -ls /data/

(screenshot: hadoop-2.8.0.tar.gz listed under /data)

At this point, the Hadoop cluster has been set up successfully.
