There are six CentOS machines, with IPs 192.168.1.121, 192.168.1.125, 192.168.1.150, 192.168.1.157, 192.168.1.158 and 192.168.1.160. The plan is to use 192.168.1.160 as the NameNode and the other five machines as DataNodes. The cluster is built with the following steps.
1. First make sure a JDK is installed and JAVA_HOME is configured on every machine (setup omitted here)
2. Edit the hosts file on all six machines: vim /etc/hosts
192.168.1.160 hadoop1
192.168.1.158 hadoop2
192.168.1.157 hadoop3
192.168.1.150 hadoop4
192.168.1.121 hadoop5
192.168.1.125 hadoop6
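The same six entries have to land in /etc/hosts on every machine. A minimal sketch of pushing them in one pass (the ssh line is commented out, so nothing is contacted when trying this locally; it assumes root SSH access to each node):

```shell
# Write the cluster name mapping once, then append it on each node.
cat > /tmp/hosts.cluster <<'EOF'
192.168.1.160 hadoop1
192.168.1.158 hadoop2
192.168.1.157 hadoop3
192.168.1.150 hadoop4
192.168.1.121 hadoop5
192.168.1.125 hadoop6
EOF
for ip in 192.168.1.160 192.168.1.158 192.168.1.157 192.168.1.150 192.168.1.121 192.168.1.125; do
    # Uncomment once root SSH to each node works:
    # ssh "root@$ip" 'cat >> /etc/hosts' < /tmp/hosts.cluster
    echo "would append to $ip:/etc/hosts"
done
```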
3. Configure passwordless SSH login
vim /etc/ssh/sshd_config
Find the following lines and remove the leading comment character "#":
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
If the configuration file was changed, the sshd service must be restarted (requires root):
service sshd restart
Suppose we configure passwordless login between 192.168.1.160 and 192.168.1.150:
1) On 160, run ssh-keygen -t rsa; this generates the files id_rsa and id_rsa.pub in /root/.ssh
2) cd /root/.ssh/
3) cat id_rsa.pub >> authorized_keys
4) On 150, run ssh-keygen -t rsa; this likewise generates id_rsa and id_rsa.pub in /root/.ssh on 150. Then repeat steps 2) and 3) there
5) On 160, run scp id_rsa.pub [email protected]:/root/.ssh/h160.pub
6) On 150, run cat h160.pub >> authorized_keys
7) Now running ssh 192.168.1.150 on 160 no longer asks for a password
8) Repeat steps 5)-7) in the other direction on 150 to complete passwordless login between the two machines. Do the same for the remaining machines
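The local half of the exchange (steps 1-3) can be tried safely with the key written under a scratch directory instead of the real /root/.ssh; a minimal sketch:

```shell
# Create a fresh RSA key pair with no passphrase, then authorize our own
# public key -- mirroring steps 1)-3) above. /tmp/ssh-demo is a throwaway
# location; on the cluster, use the default /root/.ssh paths.
rm -rf /tmp/ssh-demo && mkdir -p /tmp/ssh-demo
ssh-keygen -t rsa -N "" -q -f /tmp/ssh-demo/id_rsa
cat /tmp/ssh-demo/id_rsa.pub >> /tmp/ssh-demo/authorized_keys
chmod 600 /tmp/ssh-demo/authorized_keys
```

On distributions that ship it, `ssh-copy-id [email protected]` collapses steps 5) and 6) into a single command.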
4. Create the directory /home/hadoopCluster on all six machines and upload the Hadoop tarball into it, then:
1)tar -zxvf hadoop-2.5.2.tar.gz
2) cd /home/hadoopCluster/hadoop-2.5.2/etc/hadoop
3) vim core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
</configuration>
Note: there must be no space before hdfs: inside the fs.defaultFS value, otherwise start-up may fail
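That whitespace pitfall is easy to catch mechanically. The sketch below writes a hypothetical sample file that deliberately reproduces the mistake, then flags any `<value>` whose content starts or ends with whitespace:

```shell
# Throwaway sample with a leading space before hdfs: -- the exact mistake
# the note above warns about.
cat > /tmp/core-site-sample.xml <<'EOF'
<property>
<name>fs.defaultFS</name>
<value> hdfs://hadoop1:9000</value>
</property>
EOF
# Print line numbers of any <value> with leading/trailing whitespace:
grep -nE '<value>[[:space:]]|[[:space:]]</value>' /tmp/core-site-sample.xml
```

Running the same grep against the real core-site.xml (and the other *-site.xml files) before starting the cluster catches the problem early.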
4) vim yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop1:8088</value>
</property>
</configuration>
5) vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>hadoop1:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>
</configuration>
6) vim hdfs-site.xml
<configuration>
<property>
<name>dfs.nameservices</name>
<value>hadoop-cluster1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hadoopCluster/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoopCluster/dfs/data</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>hadoop1:9000</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
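One file the start-up scripts depend on: in Hadoop 2.x, sbin/start-dfs.sh and sbin/start-yarn.sh launch the worker daemons on the machines listed in etc/hadoop/slaves. On hadoop1, the file /home/hadoopCluster/hadoop-2.5.2/etc/hadoop/slaves should therefore list the five DataNode hostnames, one per line:

```
hadoop2
hadoop3
hadoop4
hadoop5
hadoop6
```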
7) On 160, run scp -r /home/hadoopCluster/* [email protected]:/home/hadoopCluster, and copy to the other four machines in the same way
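Copying to all five DataNodes can be a single loop using the hostnames from the /etc/hosts mapping above; a sketch (the scp line is commented out so this is safe to try off-cluster):

```shell
# Push the unpacked, configured tree from hadoop1 to every DataNode.
for host in hadoop2 hadoop3 hadoop4 hadoop5 hadoop6; do
    # Uncomment to actually copy (assumes passwordless SSH from step 3):
    # scp -r /home/hadoopCluster/* "root@$host:/home/hadoopCluster/"
    echo "would copy to $host"
done
```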
8) On the NameNode (160), enter the hadoop-2.5.2 directory and format HDFS: ./bin/hdfs namenode -format (formatting is a NameNode-only operation; it does not need to be run on the DataNodes)
9) On 160, start HDFS: sbin/start-dfs.sh
10) On 160, start YARN: sbin/start-yarn.sh
11) Visit http://192.168.1.160:8088/ to check the YARN ResourceManager web UI
12) Visit http://192.168.1.160:50070/dfshealth.html#tab-datanode to check the DataNodes in the HDFS web UI
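Beyond the two web UIs, the cluster can be sanity-checked from the shell. A sketch to run on hadoop1 from the hadoop-2.5.2 directory; each command is guarded so the snippet degrades gracefully when tried off-cluster:

```shell
# Expected jps daemons -- on hadoop1: NameNode, SecondaryNameNode,
# ResourceManager; on each DataNode machine: DataNode, NodeManager.
for cmd in "jps" "bin/hdfs dfsadmin -report"; do
    bin=${cmd%% *}                  # first word of the command line
    if command -v "$bin" >/dev/null 2>&1 || [ -x "$bin" ]; then
        $cmd
    else
        echo "skipped (not available here): $cmd"
    fi
done
```

`bin/hdfs dfsadmin -report` should list all five DataNodes as live once the cluster is healthy.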
The Hadoop cluster setup is now complete.