Setting up Hadoop 2.2.0 on CentOS 6.5

Since this is for testing and learning, we set up three virtual machines:

s1=192.168.56.101
s2=192.168.56.102
s3=192.168.56.103

Edit the hosts file:
#vim /etc/hosts //append at the bottom

192.168.56.101  hadoop1
192.168.56.102  hadoop2
192.168.56.103  hadoop3
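
To confirm the hostnames resolve correctly, a quick check you can run from any of the three machines (a minimal sketch using the names defined above):

#ping -c 1 hadoop2	//should resolve to 192.168.56.102
#ping -c 1 hadoop3	//should resolve to 192.168.56.103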

1, Install the JDK on all three machines (101, 102, 103):
Download URL: http://download.chinaunix.net/down.php?id=33931&ResourceID=61&site=1
#cd /usr/local/
#wget -c -O jdk-6u26-dlj-linux-i586.bin "http://download.chinaunix.net/down.php?id=33931&ResourceID=61&site=1"	//quote the URL so the shell does not treat & as a background operator
#chmod +x ./jdk-6u26-dlj-linux-i586.bin
#./jdk-6u26-dlj-linux-i586.bin

You will see the license agreement; scroll all the way down and type yes (or just press q, then type yes), and wait for the installation to complete.


2, Add environment variables on (101, 102, 103):

#vim + /etc/profile	//append at the very bottom of the file

export JAVA_HOME=/usr/local/jdk1.6.0_26/
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
#source /etc/profile
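
To verify the JDK is picked up, a quick check (for this package the version string should read 1.6.0_26):

#java -version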


3, Configure passwordless SSH login with keys:

(1)#ssh-keygen -t rsa	//just press Enter at every prompt; run this on all three machines

(2) Then copy each server's id_rsa.pub onto the other servers as authorized_keys:

(3) On server 101:
	#scp /root/.ssh/id_rsa.pub [email protected]:/root/.ssh/authorized_keys
	#scp /root/.ssh/id_rsa.pub [email protected]:/root/.ssh/authorized_keys

(4) On servers 102 and 103:
	#cd ~/.ssh/
	#cat id_rsa.pub >> authorized_keys

(5) Append the contents of /root/.ssh/id_rsa.pub on server 102 to the bottom of /root/.ssh/authorized_keys on servers 101 and 103; note that this is an append, not an overwrite.

(6) On servers 101 and 103:
	#cd ~/.ssh/
	#cat id_rsa.pub >> authorized_keys

(7) Append the contents of /root/.ssh/id_rsa.pub on server 103 to the bottom of /root/.ssh/authorized_keys on servers 102 and 101; note that this is an append, not an overwrite.

(8) On servers 102 and 101:
	#cd ~/.ssh/
	#cat id_rsa.pub >> authorized_keys

After completing the steps above, all three servers can SSH into each other without a password.
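
To confirm this works in every direction, a small loop you can run on each machine (a minimal sketch; it should print all three hostnames without ever prompting for a password, though the very first connection to a host will still ask you to accept its host key):

#for h in hadoop1 hadoop2 hadoop3; do ssh $h hostname; done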


4, Install Hadoop on (101):

#cd /usr/local/
#wget -c http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz

#tar zxvf hadoop-2.2.0.tar.gz
#cd hadoop-2.2.0/etc/hadoop/
#vim hadoop-env.sh 

//find the line "export JAVA_HOME=${JAVA_HOME}" and duplicate it (yy then p in vim),
comment one copy out,
and change the uncommented one to: export JAVA_HOME=/usr/local/jdk1.6.0_26/


5, Edit the core-site.xml file:

#vim core-site.xml	//add the following inside the file's <configuration>...</configuration> block
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop-2.2.0/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.hduser.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hduser.groups</name>
<value>*</value>
</property>
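
Once the environment variables from step 11 are in place, you can sanity-check that Hadoop actually picks this setting up (a minimal sketch):

#hdfs getconf -confKey fs.defaultFS	//should print hdfs://hadoop1:9000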

6, Edit the mapred-site.xml file:
#cp mapred-site.xml.template mapred-site.xml	//2.2.0 ships only the template, so create the file first
#vim mapred-site.xml	//add the following inside the file's <configuration>...</configuration> block
<property>
<name>mapred.job.tracker</name>
<value>hadoop1:9001</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>


7, Edit hdfs-site.xml:

#vim hdfs-site.xml	//add the following inside the file's <configuration>...</configuration> block
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop-2.2.0/tmp/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop-2.2.0/tmp/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>


8, Edit the yarn-site.xml file:

#vim yarn-site.xml	//add the following inside the file's <configuration>...</configuration> block
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop1:8088</value>
</property>
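
Before copying the configuration to the other machines, it is worth checking that all four XML files are well-formed (a minimal sketch using xmllint, which CentOS provides via libxml2; no output means the files parse cleanly):

#cd /usr/local/hadoop-2.2.0/etc/hadoop/
#xmllint --noout core-site.xml mapred-site.xml hdfs-site.xml yarn-site.xml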


9, Edit the slaves file:

#vim slaves	//change the contents to the following
hadoop1
hadoop2
hadoop3

10, Copy hadoop to the other two machines:

#scp -r /usr/local/hadoop-2.2.0 [email protected]:/usr/local/	//-r is required to copy a directory
#scp -r /usr/local/hadoop-2.2.0 [email protected]:/usr/local/


11, Edit the environment variables on all three servers:
#vim + /etc/profile	//append the following at the end
export HADOOP_HOME=/usr/local/hadoop-2.2.0
export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
#source /etc/profile

12, Create the directories on all three servers and grant read/write permissions:

#mkdir -p /usr/local/hadoop-2.2.0/tmp/hdfs/name /usr/local/hadoop-2.2.0/tmp/hdfs/data
#chmod -R 0777 /usr/local/hadoop-2.2.0/tmp

A note about the JDK: for some reason the build I downloaded was missing the following jars, and Hadoop would not start until I unpacked them by hand:

1, /usr/local/jdk1.6.0_26/jre/lib/jsse.jar	//if missing, run: /usr/local/jdk1.6.0_26/bin/unpack200 jsse.pack jsse.jar
2, /usr/local/jdk1.6.0_26/lib/tools.jar	//if missing, run: /usr/local/jdk1.6.0_26/bin/unpack200 tools.pack tools.jar
3, /usr/local/jdk1.6.0_26/jre/lib/rt.jar	//if missing, run: /usr/local/jdk1.6.0_26/bin/unpack200 rt.pack rt.jar

(run each unpack200 command from the directory that contains the corresponding .pack file)
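
If other jars turn out to be missing too, a small loop can unpack every remaining .pack file in one pass (a sketch, assuming the JDK is installed at /usr/local/jdk1.6.0_26):

#cd /usr/local/jdk1.6.0_26
#for p in $(find . -name "*.pack"); do ./bin/unpack200 "$p" "${p%.pack}.jar"; done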

13, Installation complete:
Then, on the master server (101):
#hadoop namenode -format	//format the NameNode
#/usr/local/hadoop-2.2.0/sbin/start-all.sh 	//start everything

#jps	//check the running processes
32029 SecondaryNameNode
31866 NameNode
32164 ResourceManager
32655 Jps
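
The DataNode and NodeManager processes run on the slave machines; since SSH login is already passwordless, you can check them from 101 (a quick sketch; if jps is not found over a non-interactive ssh session, use the full path /usr/local/jdk1.6.0_26/bin/jps):

#ssh hadoop2 jps	//should list DataNode and NodeManager
#ssh hadoop3 jps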

# hdfs dfsadmin -report	//view the HDFS status report
Configured Capacity: 14184103936 (13.21 GB)
Present Capacity: 8253059072 (7.69 GB)
DFS Remaining: 8167120896 (7.61 GB)
DFS Used: 85938176 (81.96 MB)
DFS Used%: 1.04%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Live datanodes:
Name: 192.168.10.123:50010 (hadoop3)
Hostname: hadoop3
Decommission Status : Normal
Configured Capacity: 7092051968 (6.60 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2965458944 (2.76 GB)
DFS Remaining: 4126568448 (3.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 58.19%
Last contact: Thu Feb 20 16:58:47 CST 2014


Name: 192.168.10.110:50010 (hadoop2)
Hostname: hadoop2
Decommission Status : Normal
Configured Capacity: 7092051968 (6.60 GB)
DFS Used: 85913600 (81.93 MB)
Non DFS Used: 2965585920 (2.76 GB)
DFS Remaining: 4040552448 (3.76 GB)
DFS Used%: 1.21%
DFS Remaining%: 56.97%
Last contact: Thu Feb 20 17:00:32 CST 2014

//once startup succeeds, give it a try
# hadoop fs -mkdir /test/
# hadoop fs -ls /
drwxr-xr-x   - root supergroup          0 2014-02-20 17:13 /test

Success ^_^ Now let's try uploading a file:
#hadoop fs -put CentOS-6.5-x86_64-LiveCD.iso /test/
#hadoop fs -ls /test/
Found 1 items
-rw-r--r--   1 root supergroup  680525824 2014-02-20 17:37 /test/CentOS-6.5-x86_64-LiveCD.iso
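
To double-check the upload, you can pull the file back out of HDFS and compare checksums (a sketch; the two md5 sums should match):

#hadoop fs -get /test/CentOS-6.5-x86_64-LiveCD.iso /tmp/check.iso
#md5sum CentOS-6.5-x86_64-LiveCD.iso /tmp/check.iso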

