Hadoop Distributed File System


http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html

Test environment:

Master:    desk11     192.168.122.11

Datanodes: server90   192.168.122.190
           server233  192.168.122.233
           server73   192.168.122.173  (used later for online node addition)

Set up local name resolution (/etc/hosts) on every node:

192.168.122.11   desk11.example.com    desk11

192.168.122.190  server90.example.com  server90

192.168.122.233  server233.example.com server233

192.168.122.173  server73.example.com  server73
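The four resolution entries can be appended and checked with a short script. A sketch follows; it writes to a scratch file rather than the real /etc/hosts, so it is safe to run anywhere (swap in /etc/hosts on an actual node):

```shell
# Append the cluster's name-resolution entries, then verify each short
# hostname appears. HOSTS is a scratch copy; real target is /etc/hosts.
HOSTS=/tmp/hosts.test
cat >> "$HOSTS" <<'EOF'
192.168.122.11   desk11.example.com    desk11
192.168.122.190  server90.example.com  server90
192.168.122.233  server233.example.com server233
192.168.122.173  server73.example.com  server73
EOF
for h in desk11 server90 server233 server73; do
    # -w matches whole words, so "desk11" also matches in "desk11.example.com"
    grep -qw "$h" "$HOSTS" && echo "$h resolves"
done
```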

1. Environment setup

Hadoop is a Java program, so it must run on a Java virtual machine (a JDK).

Prepare the JDK from jdk-6u26-linux-x64.bin:

sh jdk-6u26-linux-x64.bin

mv jdk1.6.0_26/ /usr/local/jdk

vim /etc/profile

Add:

export JAVA_HOME=/usr/local/jdk

export CLASSPATH=.:$JAVA_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile

If the system already has a default Java (OpenJDK) installed, update the alternatives:

# alternatives --install /usr/bin/java java /usr/local/jdk/bin/java 2

# alternatives --set java /usr/local/jdk/bin/java

# java -version

java version "1.6.0_32"

Java(TM) SE Runtime Environment (build 1.6.0_32-b05)

Java HotSpot(TM) 64-Bit Server VM (build 20.7-b02, mixed mode)

Check where the java command resolves:

which java

If it prints /usr/local/jdk/bin/java, the location is correct.

The Java environment is ready.

2. Pseudo-distributed mode (Master, Datanode, and all other roles on one host)

Master host: desk11

yum -y install openssh rsync

useradd -u 600 hadoop    # everything Hadoop-related runs as the hadoop user

echo hadoop | passwd --stdin hadoop

chown hadoop.hadoop /home/hadoop -R

All of the following is done as the hadoop user:

su - hadoop

Set up passwordless ssh (ssh equivalence):

ssh-keygen    # press Enter at every prompt

ssh-copy-id localhost

The result should look like this:

[hadoop@desk11 ~]$ ssh localhost

Last login: Sat Aug 3 13:59:58 2013 from localhost
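Key generation can also be scripted for repeatability. A sketch follows; the key path is a scratch location so the example can run anywhere, whereas the real setup above uses the default ~/.ssh/id_rsa:

```shell
# Generate a passwordless RSA key pair non-interactively (-N "" sets an
# empty passphrase). Scratch path here; real setup writes ~/.ssh/id_rsa.
KEY=/tmp/hadoop_demo_key
rm -f "$KEY" "$KEY.pub"       # start clean so ssh-keygen does not prompt
ssh-keygen -q -t rsa -N "" -f "$KEY"
ls -l "$KEY" "$KEY.pub"
# On the real hosts, ssh-copy-id then installs the public key on each node:
#   ssh-copy-id hadoop@server90
```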

Configure Hadoop:

cd hadoop-1.0.4/conf

vim core-site.xml

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://desk11:9000</value>

</property>

</configuration>

vim hdfs-site.xml

<configuration>

<property>

<name>dfs.replication</name>

<value>1</value>    <!-- one replica per block: single-node setup -->

</property>

</configuration>

vim mapred-site.xml

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>desk11:9001</value>

</property>

</configuration>
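The config files above can also be generated non-interactively, which helps when the setup is repeated on several machines. A sketch, assuming the same values as above; it writes into a scratch directory, so point CONF_DIR at the real hadoop-1.0.4/conf to use it:

```shell
# Generate minimal core-site.xml and mapred-site.xml from the values above.
# CONF_DIR is a scratch directory for this demo, not the live conf dir.
CONF_DIR=/tmp/hadoop-conf-demo
mkdir -p "$CONF_DIR"

cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://desk11:9000</value>
</property>
</configuration>
EOF

cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>desk11:9001</value>
</property>
</configuration>
EOF

grep -h '<value>' "$CONF_DIR"/*.xml    # quick check of the configured endpoints
```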

[hadoop@desk11 conf]$ vim hadoop-env.sh

export JAVA_HOME=/usr/local/jdk

If the Java environment's location is not set here, Hadoop reports an error at startup.

Format the namenode:

bin/hadoop namenode -format

A successful format ends with a "successfully formatted" message.

Start Hadoop:

bin/start-all.sh

Check the started processes with jps.

Once everything is up, the cluster status pages are reachable in a browser.

Tests:

bin/hadoop fs -put conf westos    # upload the local conf directory to HDFS as westos

bin/hadoop fs -ls    # list the contents of the filesystem

Copy files from the distributed filesystem back to the local disk:

bin/hadoop fs -get westos test    # download westos from HDFS into a local test directory

The current directory now contains a test directory whose contents are exactly the configuration files that were uploaded from conf.

Create and remove a directory in the filesystem:

bin/hadoop fs -mkdir wangzi

bin/hadoop fs -ls

bin/hadoop fs -rmr wangzi    # remove a directory from HDFS

bin/hadoop fs -ls

bin/hadoop jar hadoop-examples-1.0.4.jar grep westos output 'dfs[a-z.]+'

# compute test: search westos for strings starting with dfs and store the result in output
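What the grep example computes can be seen with a local analogy that needs no Hadoop at all: extract every match of the regex and count occurrences (the MapReduce version does the extraction in the first job and the sorting of counts in a second). A sketch with made-up sample input:

```shell
# Local analogy of the MapReduce grep example: pull out every match of
# 'dfs[a-z.]+' (-o: matches only, -h: no filenames) and count them.
mkdir -p /tmp/grep-demo
printf 'dfs.replication\ndfs.name.dir\ndfs.replication\n' > /tmp/grep-demo/sample.txt
# counts 'dfs.replication' twice and 'dfs.name.dir' once, highest first
grep -ohE 'dfs[a-z.]+' /tmp/grep-demo/sample.txt | sort | uniq -c | sort -rn
```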

The run in progress:

13/08/03 13:38:27 INFO util.NativeCodeLoader: Loaded the native-hadoop library

13/08/03 13:38:27 WARN snappy.LoadSnappy: Snappy native library not loaded

13/08/03 13:38:27 INFO mapred.FileInputFormat: Total input paths to process : 16

13/08/03 13:38:27 INFO mapred.JobClient: Running job: job_201308031321_0001

13/08/03 13:38:28 INFO mapred.JobClient:  map 0% reduce 0%

13/08/03 13:39:28 INFO mapred.JobClient:  map 6% reduce 0%

13/08/03 13:39:33 INFO mapred.JobClient:  map 12% reduce 0%

Web monitoring:

The web UI shows that a job has been submitted and is running.

Once the run completes, you can see that Hadoop submitted the work as two jobs.

[hadoop@desk11 hadoop-1.0.4]$ bin/hadoop fs -ls

View the contents of output.

Shut down Hadoop:

[hadoop@desk11 hadoop-1.0.4]$ bin/stop-all.sh

[hadoop@desk11 hadoop-1.0.4]$ jps

28027 Jps

3. Fully distributed filesystem

1) JDK environment

On both data nodes, server90 and server233:

sh jdk-6u26-linux-x64.bin

mv jdk1.6.0_26/ /usr/local/jdk

vim /etc/profile

Add:

export JAVA_HOME=/usr/local/jdk

export CLASSPATH=.:$JAVA_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile

Check where the java command resolves:

which java

If it prints /usr/local/jdk/bin/java, the location is correct.

The Java environment is ready.

2) sersync setup (done as root)

The configuration must be identical on the Master and every other node, and ssh equivalence must be established between the nodes: an ssh connection between any two nodes must not prompt for a password.

For convenient deployment we use sersync.

Required package: sersync2.5.4_64bit_binary_stable_final.tar.gz

On every node:

yum -y install rsync xinetd

On the Master:

tar zxf sersync2.5.4_64bit_binary_stable_final.tar.gz

[root@desk11 home]# ls

GNU-Linux-x86  hadoop

[root@desk11 home]# cd GNU-Linux-x86/

[root@desk11 GNU-Linux-x86]# ls

confxml.xml  sersync2

[root@desk11 GNU-Linux-x86]# vim confxml.xml

<sersync>

<localpath watch="/home/hadoop">    <!-- the directory the sync server watches -->

<remote ip="192.168.122.190" name="rsync"/>    <!-- name must match the rsync module name configured on the target server -->

<remote ip="192.168.122.233" name="rsync"/>

<!--<remote ip="192.168.8.40" name="tongbu"/>-->

</localpath>

On the target servers, i.e. the two data nodes:

useradd -u 600 hadoop    # everything Hadoop-related runs as the hadoop user

echo hadoop | passwd --stdin hadoop

[root@server90 ~]# vim /etc/rsyncd.conf

uid = hadoop    # everything synced over is owned by user and group hadoop

gid = hadoop

max connections = 36000

use chroot = no

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

lock file = /var/run/rsyncd.lock

[rsync]    # must match name="rsync" configured on the Master

path = /home/hadoop    # local directory to sync into

comment = test files

ignore errors = yes

read only = no

hosts allow = 192.168.122.11/24

hosts deny = *

[root@server90 ~]# /etc/init.d/xinetd restart

Stopping xinetd: [ OK ]

Starting xinetd: [ OK ]

rsync --daemon

The steps on both data nodes are exactly the same.
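Since the same rsyncd.conf goes on every data node, generating it from a heredoc keeps the copies identical. A sketch follows; it writes to a scratch path (on a real node the target would be /etc/rsyncd.conf), and the check at the end is a minimal sanity test, not a full config validation:

```shell
# Generate the rsyncd.conf shown above so the same file can be pushed to
# every data node. Scratch path here; real target is /etc/rsyncd.conf.
cat > /tmp/rsyncd.conf.test <<'EOF'
uid = hadoop
gid = hadoop
max connections = 36000
use chroot = no
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsyncd.lock

[rsync]
path = /home/hadoop
comment = test files
ignore errors = yes
read only = no
hosts allow = 192.168.122.11/24
hosts deny = *
EOF
grep -c '^\[rsync\]' /tmp/rsyncd.conf.test    # the module section must appear exactly once
```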

On the Master node:

[root@desk11 GNU-Linux-x86]# /etc/init.d/xinetd restart

Stopping xinetd: [FAILED]

Starting xinetd: [ OK ]

[root@desk11 GNU-Linux-x86]# ./sersync2 -r -d    # -r: full initial sync; -d: run as a daemon

sersync2 then runs in the background watching the sync server's data; whenever the data changes, the changes are pushed to the other two nodes.

3) Configuring the distributed filesystem

As the hadoop user, on the Master node:

[hadoop@desk11 conf]$ vim masters    # designates the Master

desk11    # make sure the name resolves

[hadoop@desk11 conf]$ vim slaves

server90

server233

[hadoop@desk11 conf]$ vim hdfs-site.xml

<configuration>

<property>

<name>dfs.replication</name>

<value>2</value>    <!-- two replicas: one per datanode -->

</property>

</configuration>

Start it up:

[hadoop@desk11 hadoop-1.0.4]$ bin/hadoop namenode -format

bin/start-all.sh

Note that the DataNode process does not start on the Master node; it runs on the two data nodes instead.

On the data nodes server90 and server233:

[hadoop@server90 conf]$ jps

22978 DataNode

23145 Jps

23071 TaskTracker

[hadoop@server233 ~]$ jps

23225 Jps

23150 TaskTracker

23055 DataNode

Tests:

The web UI now shows that the node count has become two.

Run the test program:

[hadoop@desk11 hadoop-1.0.4]$ bin/hadoop jar hadoop-examples-1.0.4.jar grep westos output 'dfs[a-z.]+'

13/08/05 01:41:10 INFO util.NativeCodeLoader: Loaded the native-hadoop library

13/08/05 01:41:10 WARN snappy.LoadSnappy: Snappy native library not loaded

13/08/05 01:41:10 INFO mapred.FileInputFormat: Total input paths to process : 16

13/08/05 01:41:11 INFO mapred.JobClient: Running job: job_201308050135_0001

13/08/05 01:41:12 INFO mapred.JobClient:  map 0% reduce 0%

13/08/05 01:41:39 INFO mapred.JobClient:  map 12% reduce 0%

13/08/05 01:41:43 INFO mapred.JobClient:  map 25% reduce 0%

13/08/05 01:42:05 INFO mapred.JobClient:  map 31% reduce 0%

13/08/05 01:42:08 INFO mapred.JobClient:  map 37% reduce 0%

13/08/05 01:42:17 INFO mapred.JobClient:  map 43% reduce 0%

13/08/05 01:42:20 INFO mapred.JobClient:  map 50% reduce 0%

Monitor the cluster state:

[hadoop@desk11 hadoop-1.0.4]$ bin/hadoop dfsadmin -report

Configured Capacity: 6209044480 (5.78 GB)

Present Capacity: 3567787548 (3.32 GB)

DFS Remaining: 3567493120 (3.32 GB)

DFS Used: 294428 (287.53 KB)

DFS Used%: 0.01%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

-------------------------------------------------

Datanodes available: 2 (2 total, 0 dead)

Name: 192.168.122.233:50010

Decommission Status : Normal

Configured Capacity: 3104522240 (2.89 GB)

DFS Used: 147214 (143.76 KB)

Non DFS Used: 1320521970 (1.23 GB)

DFS Remaining: 1783853056 (1.66 GB)

DFS Used%: 0%

DFS Remaining%: 57.46%

Last contact: Mon Aug 05 01:45:38 CST 2013

Name: 192.168.122.190:50010

Decommission Status : Normal

Configured Capacity: 3104522240 (2.89 GB)

DFS Used: 147214 (143.76 KB)

Non DFS Used: 1320734962 (1.23 GB)

DFS Remaining: 1783640064 (1.66 GB)

DFS Used%: 0%

DFS Remaining%: 57.45%

Last contact: Mon Aug 05 01:45:36 CST 2013

Both Datanodes took part in the computation: the load is balanced across them.

4) Adding a Hadoop node online

1. Install the JDK on the new node and create the same hadoop user, keeping the uid and other attributes consistent.

2. Add the new node's IP or hostname to the conf/slaves file.

3. Establish ssh equivalence between server73 and every existing node.

4. Sync all of the Master's hadoop data to the new node, keeping the paths identical.

5. Start the services on the new node:

[hadoop@server73 hadoop-1.0.4]$ bin/hadoop-daemon.sh start datanode

[hadoop@server73 hadoop-1.0.4]$ bin/hadoop-daemon.sh start tasktracker

[hadoop@server73 hadoop-1.0.4]$ jps

1926 DataNode

2024 TaskTracker

2092 Jps

The node count has gone up by one.

6. Rebalance the data. On the Master node:

[hadoop@desk11 hadoop-1.0.4]$ bin/start-balancer.sh

starting balancer, logging to /home/hadoop/hadoop-1.0.4/libexec/../logs/hadoop-hadoop-balancer-desk11.example.com.out

1) If you skip the rebalance, the cluster writes all new data to the new datanode, which lowers MapReduce efficiency.

2) The balancing threshold can be set; it defaults to 10%. A lower value balances the nodes more evenly but takes longer.

[hadoop@desk11 hadoop-1.0.4]$ bin/start-balancer.sh -threshold 5
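The threshold semantics can be illustrated without a cluster: the balancer moves blocks until every datanode's utilization is within the threshold (in percentage points) of the cluster-wide average. A sketch in plain shell arithmetic; the utilization numbers are made up for illustration, not taken from this cluster:

```shell
# Which nodes would the balancer still consider out of balance?
avg=33          # cluster average DFS Used% (example value)
threshold=5     # as passed via: bin/start-balancer.sh -threshold 5
for node_util in 30 36 45; do
    diff=$(( node_util - avg ))
    if [ "$diff" -lt 0 ]; then diff=$(( -diff )); fi   # absolute deviation
    if [ "$diff" -gt "$threshold" ]; then
        echo "node at ${node_util}% is out of balance (off by ${diff} points)"
    else
        echo "node at ${node_util}% is within the threshold"
    fi
done
```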

[hadoop@desk11 hadoop-1.0.4]$ bin/hadoop jar hadoop-examples-1.0.4.jar grep westos test 'dfs[a-z.]+'

13/08/05 03:03:49 INFO util.NativeCodeLoader: Loaded the native-hadoop library

13/08/05 03:03:49 WARN snappy.LoadSnappy: Snappy native library not loaded

13/08/05 03:03:49 INFO mapred.FileInputFormat: Total input paths to process : 16

13/08/05 03:03:49 INFO mapred.JobClient: Running job: job_201308050135_0003

13/08/05 03:03:50 INFO mapred.JobClient:  map 0% reduce 0%

[hadoop@desk11 hadoop-1.0.4]$ bin/hadoop dfsadmin -report

Configured Capacity: 9313566720 (8.67 GB)

Present Capacity: 5379882933 (5.01 GB)

DFS Remaining: 5378859008 (5.01 GB)

DFS Used: 1023925 (999.93 KB)

DFS Used%: 0.02%

Under replicated blocks: 2

Blocks with corrupt replicas: 0

Missing blocks: 0

-------------------------------------------------

Datanodes available: 3 (3 total, 0 dead)

Name: 192.168.122.233:50010

Decommission Status : Normal

Configured Capacity: 3104522240 (2.89 GB)

DFS Used: 424147 (414.21 KB)

Non DFS Used: 1321731885 (1.23 GB)

DFS Remaining: 1782366208 (1.66 GB)

DFS Used%: 0.01%

DFS Remaining%: 57.41%

Last contact: Mon Aug 05 03:04:25 CST 2013

Name: 192.168.122.73:50010    # the newly added node

Decommission Status : Normal

Configured Capacity: 3104522240 (2.89 GB)

DFS Used: 195467 (190.89 KB)

Non DFS Used: 1290097781 (1.2 GB)

DFS Remaining: 1814228992 (1.69 GB)

DFS Used%: 0.01%

DFS Remaining%: 58.44%

Last contact: Mon Aug 05 03:04:24 CST 2013

Name: 192.168.122.190:50010

Decommission Status : Normal

Configured Capacity: 3104522240 (2.89 GB)

DFS Used: 404311 (394.83 KB)

Non DFS Used: 1321854121 (1.23 GB)

DFS Remaining: 1782263808 (1.66 GB)

DFS Used%: 0.01%

DFS Remaining%: 57.41%

Last contact: Mon Aug 05 03:04:23 CST 2013

5) Removing a datanode online

[hadoop@desk11 conf]$ vim hdfs-site.xml

(dfs.hosts.exclude is read by the namenode, so it belongs in hdfs-site.xml.) Add:

<property>

<name>dfs.hosts.exclude</name>

<value>/home/hadoop/hadoop-1.0.4/conf/datanode-exclude</value>

</property>

Create the /home/hadoop/hadoop-1.0.4/conf/datanode-exclude file and list the hosts to remove, one per line:

[hadoop@desk11 conf]$ echo "server73" > \

/home/hadoop/hadoop-1.0.4/conf/datanode-exclude

Refresh the nodes online from the master:

[hadoop@desk11 hadoop-1.0.4]$ bin/hadoop dfsadmin -refreshNodes

This operation migrates the node's data in the background.

The Datanode list now shows that one node has gone down.
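Decommissioning can be watched from the command line by extracting the per-node "Decommission Status" lines from `dfsadmin -report`. A sketch on sample report text (no Hadoop needed; the status values in the sample are illustrative):

```shell
# Pair each "Name:" line with the "Decommission Status" line that follows it.
cat > /tmp/report.txt <<'EOF'
Name: 192.168.122.73:50010
Decommission Status : Decommission in progress
Name: 192.168.122.190:50010
Decommission Status : Normal
EOF
awk '/^Name:/ {node=$2}
     /^Decommission Status/ {sub(/^[^:]*: /, ""); print node, "->", $0}' /tmp/report.txt
```

On a live cluster the same awk filter would be fed from `bin/hadoop dfsadmin -report` instead of the saved file.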

6) Removing a tasktracker node online

On the master, edit conf/mapred-site.xml:

<property>

<name>mapred.hosts.exclude</name>

<value>/home/hadoop/hadoop-1.0.4/conf/tasktracker-exclude</value>

</property>

Create the /home/hadoop/hadoop-1.0.4/conf/tasktracker-exclude file:

touch /home/hadoop/hadoop-1.0.4/conf/tasktracker-exclude

vim /home/hadoop/hadoop-1.0.4/conf/tasktracker-exclude

server73    (i.e. 192.168.122.173)

Refresh the nodes:

[hadoop@desk11 bin]$ ./hadoop mradmin -refreshNodes
