Tutorial: Configuring Hadoop in Fully Distributed Mode on Ubuntu 14.04

 

Environment:

Operating system: two Ubuntu 14.04.5 machines (32-bit)

Software: VMware 12

 

Configuration Steps

 

I. Put all hosts on the same LAN

Set VMware's virtual network to bridged mode (bridged to whichever physical adapter your host machine actually uses), and set each virtual machine's network adapter to bridged mode as well.
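Before going further, it is worth a quick check that the two VMs really are on the same LAN (a minimal sanity check; at this point the addresses are still DHCP-assigned):

ifconfig eth0                        # note the address each VM received
ping -c 3 <the other VM's address>   # each VM should be able to reach the other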

 

II. Virtual machine configuration

1. To make the hosts easy to tell apart, change each Ubuntu machine's hostname.

Command: sudo vim /etc/hostname

(change it on each machine accordingly)

master node: master

slave node: hadoop-LZW

 

2. Add the IP addresses of the cluster hosts on the LAN and map them to hostnames.

Command: sudo vim /etc/hosts

Add the following:

127.0.0.1   localhost



192.168.191.91  master

192.168.191.94  hadoop-LZW



# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback

fe00::0 ip6-localnet

ff00::0 ip6-mcastprefix

ff02::1 ip6-allnodes

ff02::2 ip6-allrouters
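Once the static addresses from the next step are in place, name resolution can be checked from either machine (a quick sanity check using the entries above):

ping -c 3 master
ping -c 3 hadoop-LZW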

3. Set a static IP (Ubuntu 14.04; other versions differ slightly)

The default gateway is 192.168.191.1 and the subnet mask is 255.255.255.0.

Command: sudo vim /etc/network/interfaces

Add the following:

auto eth0

iface eth0 inet static

address 192.168.191.91

netmask 255.255.255.0

gateway 192.168.191.1

dns-nameservers 211.136.20.203

Also edit /etc/NetworkManager/NetworkManager.conf and set the managed parameter to true.

Command: sudo vim /etc/NetworkManager/NetworkManager.conf
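After the change, the relevant section of the file should look like this (only the managed line is modified; the rest of the stock Ubuntu 14.04 file stays as-is):

[ifupdown]
managed=true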

 

4. Restart networking

Command: sudo /etc/init.d/networking restart

(If the command above does not take effect, reboot the virtual machine.)

 

III. Set up passwordless SSH login

Passwordless SSH was already set up in the pseudo-distributed configuration; if you have not done it yet, see the earlier tutorial on configuring Hadoop's local and pseudo-distributed modes on Ubuntu 14.04.

What needs to be done here is to copy the master node's (this machine's) public key to the slave node.

Command: scp ~/.ssh/id_rsa.pub hadoop@hadoop-LZW:~/.ssh/authorized_keys

 

Then set the file's permissions (on the slave node):

Command: chmod 644 ~/.ssh/authorized_keys

 

The commands above only let the master node log in to each slave node without a password. If you want every node to be able to reach every other node, do it the other way around: first send each slave node's public key to the master, then distribute the combined keys from the master back to each slave node (see the sketch below).

When sending a public key you will be prompted for the destination node's login password.
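A minimal sketch of that mutual setup, assuming the login user is hadoop and a single slave hadoop-LZW (note that scp-ing straight onto authorized_keys overwrites any keys already there, so appending with cat is safer):

# on the slave: send its public key to the master under a temporary name
scp ~/.ssh/id_rsa.pub hadoop@master:~/.ssh/id_rsa.pub.hadoop-LZW

# on the master: append the slave's key and the master's own key, then push the combined file back
cat ~/.ssh/id_rsa.pub.hadoop-LZW >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@hadoop-LZW:~/.ssh/authorized_keys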

 

IV. Configure environment variables

Open the .bashrc file and add the following.

Command: sudo vim ~/.bashrc

# Java set

  export JAVA_HOME=/usr/local/jdk1.8.0_201

  export JRE_HOME=$JAVA_HOME/jre

  export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib

  export PATH=$PATH:$JAVA_HOME/bin

  

  # Hadoop set

  export HADOOP_HOME=/usr/local/hadoop-3.0.2

  export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

  export HADOOP_MAPRED_HOME=$HADOOP_HOME

  export HADOOP_COMMON_HOME=$HADOOP_HOME

  export YARN_HOME=$HADOOP_HOME

  export HADOOP_ROOT_LOGGER=INFO,console

  export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

  export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

After editing, reload the environment variables.

Command: source ~/.bashrc
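A quick check that the new paths took effect (assuming the JDK and Hadoop versions above are installed under /usr/local):

java -version      # should report 1.8.0_201
hadoop version     # should report Hadoop 3.0.2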

 

V. Edit the Hadoop configuration files

All of the following configuration files are under hadoop-3.0.2/etc/hadoop (i.e. /usr/local/hadoop-3.0.2/etc/hadoop).

① Configure the workers file

Delete localhost.

Add the following (one hostname per line):

hadoop-LZW

 

② Configure core-site.xml

<configuration>

     <property>

         <name>fs.checkpoint.period</name>

         <value>3600</value>

     </property>

     <property>

         <name>fs.checkpoint.size</name>

         <value>67108864</value>

     </property>

     <property>

         <name>hadoop.tmp.dir</name>

         <value>file:/usr/local/hadoop-3.0.2/tmp</value>

         <description>Abase for other temporary directories.</description>

     </property>

     <property>

         <name>fs.defaultFS</name>

         <value>hdfs://192.168.191.91:9000</value>

     </property>

 </configuration>
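hadoop.tmp.dir points at a directory that may not exist yet; creating it up front on both nodes (assuming a hadoop user owns the Hadoop installation) avoids permission errors later:

sudo mkdir -p /usr/local/hadoop-3.0.2/tmp
sudo chown -R hadoop /usr/local/hadoop-3.0.2/tmp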

 

③ Configure hdfs-site.xml

<configuration>

     <property>

         <name>dfs.replication</name>

         <value>1</value>

     </property>

     <property>

         <name>dfs.namenode.name.dir</name>

         <value>file:/usr/local/hadoop-3.0.2/tmp/dfs/name</value>

     </property>

     <property>

         <name>dfs.datanode.data.dir</name>

         <value>file:/usr/local/hadoop-3.0.2/tmp/dfs/data</value>

     </property>

     <property>

         <name>dfs.namenode.secondary.http-address</name>

         <value>192.168.191.91:50090</value>

     </property>

     <property>

         <name>dfs.namenode.http-address</name>

         <value>192.168.191.91:50070</value>

     </property>

     <property>

         <name>dfs.namenode.checkpoint.dir</name>

         <value>file:/usr/local/hadoop-3.0.2/tmp/dfs/checkpoint</value>

     </property>

     <property>

         <name>dfs.namenode.checkpoint.edits.dir</name>

         <value>file:/usr/local/hadoop-3.0.2/tmp/dfs/edits</value>

     </property>



 </configuration>

 

④ Configure mapred-site.xml

<configuration>

     <property>

         <name>mapreduce.framework.name</name>

         <value>yarn</value>

     </property>

     <property>

         <name>mapreduce.jobhistory.address</name>

         <value>192.168.191.91:10020</value>

     </property>

     <property>

         <name>mapreduce.jobhistory.webapp.address</name>

         <value>192.168.191.91:19888</value>

     </property>

 </configuration>

 

⑤ Configure yarn-site.xml

<configuration>



 <!-- Site specific YARN configuration properties -->

     

     <property>

         <name>yarn.resourcemanager.hostname</name>

         <value>192.168.191.91</value>

     </property>

     <property>

         <name>yarn.nodemanager.aux-services</name>

         <value>mapreduce_shuffle</value>

     </property>

     <property>

         <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>

         <value>org.apache.hadoop.mapred.ShuffleHandler</value>

     </property>

     <property>

         <name>yarn.resourcemanager.resource-tracker.address</name>

         <value>192.168.191.91:8025</value>

     </property>

     <property>

         <name>yarn.resourcemanager.scheduler.address</name>

         <value>192.168.191.91:8030</value>

     </property>

     <property>

         <name>yarn.resourcemanager.address</name>

         <value>192.168.191.91:8040</value>

     </property>

     <property>

         <name>yarn.resourcemanager.admin.address</name>

         <value>192.168.191.91:8033</value>

     </property>

     <property>

         <name>yarn.resourcemanager.webapp.address</name>

         <value>192.168.191.91:8088</value>

     </property>



 </configuration>

 

Configure log4j.properties

Append the following line at the end of the file (it silences the repeated "unable to load native-hadoop library" warning):

log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR

 

The following scripts are all under hadoop-3.0.2/sbin (i.e. /usr/local/hadoop-3.0.2/sbin); add the lines shown to the top of each file.

① Configure start-dfs.sh

HDFS_DATANODE_USER=master

HDFS_DATANODE_SECURE_USER=hdfs

HDFS_NAMENODE_USER=master

HDFS_SECONDARYNAMENODE_USER=master

 

② Configure stop-dfs.sh

HDFS_DATANODE_USER=master

HDFS_DATANODE_SECURE_USER=hdfs

HDFS_NAMENODE_USER=master

HDFS_SECONDARYNAMENODE_USER=master

 

③ Configure start-yarn.sh

YARN_RESOURCEMANAGER_USER=master

HADOOP_SECURE_DN_USER=yarn

YARN_NODEMANAGER_USER=master

 

④ Configure stop-yarn.sh

YARN_RESOURCEMANAGER_USER=master

HADOOP_SECURE_DN_USER=yarn

YARN_NODEMANAGER_USER=master
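These *_USER variables tell the Hadoop 3 start/stop scripts which Linux user each daemon should run as (they are required when the scripts are invoked as root). The value should be the Linux user that actually runs the daemons, so if your login user is hadoop rather than master — the scp command in the SSH section above logs in as hadoop — the start-dfs.sh header would look like this sketch instead (the other three scripts follow the same pattern):

HDFS_DATANODE_USER=hadoop
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=hadoop
HDFS_SECONDARYNAMENODE_USER=hadoop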

 

VI. Sync to the slave node VMs

Copy the JDK 1.8 installation, the hadoop-3.0.2 installation, and the environment-variable file to each slave VM.

Commands (run the set once for each slave node):

sudo scp -r /usr/local/jdk1.8.0_201 hadoop-LZW:/usr/local/

sudo scp -r /usr/local/hadoop-3.0.2 hadoop-LZW:/usr/local/

sudo scp -r ~/.bashrc hadoop-LZW:~/
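If the remote side refuses a direct copy into /usr/local (writing there needs root on the slave), a common workaround, again assuming the login user hadoop, is to copy into the slave's home directory and then move the files into place there:

scp -r /usr/local/jdk1.8.0_201 hadoop@hadoop-LZW:~/
scp -r /usr/local/hadoop-3.0.2 hadoop@hadoop-LZW:~/
scp ~/.bashrc hadoop@hadoop-LZW:~/

# then, on the slave node:
sudo mv ~/jdk1.8.0_201 ~/hadoop-3.0.2 /usr/local/
source ~/.bashrc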

If the commands above do not work, you can instead copy the whole virtual machine's files to the other computer, open the VM there, and redo the slave-node configuration. If the SSH connection fails, you may need to reinstall ssh and resend the public keys to the master or slave nodes.

 

VII. Run and test
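Before the very first start, the HDFS NameNode normally needs to be formatted once on the master (do not rerun it later, since that wipes the HDFS metadata):

hdfs namenode -format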

Command: start-all.sh

Visit the page at http://192.168.191.91:8088 (the ResourceManager web UI).

Visit the page at http://192.168.191.94:8042 (the NodeManager web UI, which runs on the slave node).
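The daemons can also be checked from the command line with jps on each node (roughly what you should see under this configuration; process IDs are omitted):

# on master
jps
#   NameNode
#   SecondaryNameNode
#   ResourceManager
#   Jps

# on hadoop-LZW
jps
#   DataNode
#   NodeManager
#   Jps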

At this point, the fully distributed configuration is complete.
