Tracing Hadoop's Runtime Footprint

Hadoop Learning Summary, Part 1: An Introduction to HDFS

Hadoop Learning Summary, Part 2: The HDFS Read/Write Process

Hadoop Learning Summary, Part 3: Getting Started with Map-Reduce

Hadoop Learning Summary, Part 4: The Map-Reduce Process Explained

 

When using Hadoop you can run into all kinds of problems, and because Hadoop's runtime machinery is fairly complex, it is often hard to pin down where things went wrong.

This article traces the footprints Hadoop leaves behind as it runs, so that when a problem does occur, those traces can be used to track down the cause.

1. Setting Up the Environment

To follow these runtime traces we need a specially prepared environment in which we can observe, step by step, the changes caused by the key operations discussed in the earlier articles.

We first set up a Hadoop cluster with one NameNode (namenode: 192.168.1.104) and three DataNodes (datanode01: 192.168.1.105, datanode02: 192.168.1.106, datanode03: 192.168.1.107), with the SecondaryNameNode running on the same machine as the NameNode.
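For a cluster laid out like this, bin/start-all.sh decides where to launch daemons from the conf/masters and conf/slaves files on the NameNode. A minimal sketch of what they would contain here (the exact file contents are my assumption; they are not shown in the original setup):

# conf/slaves -- hosts that run a DataNode and a TaskTracker
192.168.1.105
192.168.1.106
192.168.1.107

# conf/masters -- host that runs the SecondaryNameNode
192.168.1.104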

On all four machines, Hadoop gets the same base configuration:

  • The NameNode, SecondaryNameNode, and JobTracker all run on namenode: 192.168.1.104
  • The DataNodes, TaskTrackers, and the Map and Reduce Task JVMs they spawn run on datanode01, datanode02, and datanode03
  • Data is replicated three times
  • HDFS and Map-Reduce data is stored under /data/hadoopdir/tmp

<property>
<name>fs.default.name</name>
<value>hdfs://192.168.1.104:9000</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>192.168.1.104:9001</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoopdir/tmp</value>
<description>A base for other temporary directories.</description>
</property>

However, because the Map-Reduce process is relatively complex, and we want to attach remote debuggers to the Map and Reduce Task JVMs so that we can step through them, the NameNode and the three DataNodes get some additional, differing settings:

On the NameNode:

  • Set mapred.job.reuse.jvm.num.tasks to -1, so that Map and Reduce Tasks running on the same DataNode share a single JVM; this makes it easy to attach a remote debugger to that JVM and avoids conflicts caused by several Task JVMs listening on the same debug port
  • For mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum, the values configured on each DataNode take precedence
  • Set io.sort.mb to 1 MB (the default is 100 MB) so that during the Map phase the in-memory map output spills to files as early as possible, letting us observe the map output
  • In mapred.child.java.opts, i.e. the Task JVM's launch options, add the remote-debug listening port 8883

<property>
<name>mapred.job.reuse.jvm.num.tasks</name>
<value>-1</value>
<description></description>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>1</value>
<description></description>
</property>
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>1</value>
<description></description>
</property>
<property>
<name>io.sort.mb</name>
<value>1</value>
<description></description>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx200m -agentlib:jdwp=transport=dt_socket,address=8883,server=y,suspend=y</value>
<description></description>
</property>

<property>
<name>mapred.job.shuffle.input.buffer.percent</name>
<value>0.001</value>
<description></description>
</property>

<property>
<name>mapred.job.shuffle.merge.percent</name>
<value>0.001</value>
<description></description>
</property>

<property>
<name>io.sort.factor</name>
<value>2</value>
<description></description>
</property>

On the DataNodes:

  • On datanode01 (192.168.1.105), set the number of concurrently running map tasks (mapred.tasktracker.map.tasks.maximum) to 1 and the number of concurrently running reduce tasks (mapred.tasktracker.reduce.tasks.maximum) to 0
  • On datanode02 (192.168.1.106), set both mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum to 0
  • On datanode03 (192.168.1.107), set mapred.tasktracker.map.tasks.maximum to 0 and mapred.tasktracker.reduce.tasks.maximum to 1
  • We do this because, although we can make several Map Tasks share one JVM, we cannot make a Map Task and a Reduce Task share the same JVM; if a Map Task JVM and a Reduce Task JVM started on the same machine at the same time, they would still conflict over the remote-debug port.
  • With these settings, datanode01 runs only Map Tasks, datanode03 runs only Reduce Tasks, and datanode02 runs no Tasks at all; its TaskTracker does not even need to start
  • For the Reduce Task, set mapred.job.shuffle.input.buffer.percent and mapred.job.shuffle.merge.percent to 0.001, so that the intermediate results of the copy and merge phases are written to disk (the in-memory buffers being deliberately tiny) and leave traces we can inspect
  • Set io.sort.factor to 2, so that merging is triggered even though the map tasks produce little output.

Besides remotely debugging the Map and Reduce Tasks, we also want to remotely debug the NameNode, SecondaryNameNode, DataNodes, JobTracker, and TaskTrackers, which requires a small change to the bin/hadoop script:

if [ "$COMMAND" = "namenode" ] ; then

CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS -agentlib:jdwp=transport=dt_socket,address=8888,server=y,suspend=n"

elif [ "$COMMAND" = "secondarynamenode" ] ; then

CLASS='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_SECONDARYNAMENODE_OPTS -agentlib:jdwp=transport=dt_socket,address=8887,server=y,suspend=n"

elif [ "$COMMAND" = "datanode" ] ; then

CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_DATANODE_OPTS -agentlib:jdwp=transport=dt_socket,address=8886,server=y,suspend=n"

……

elif [ "$COMMAND" = "jobtracker" ] ; then

CLASS=org.apache.hadoop.mapred.JobTracker

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_JOBTRACKER_OPTS -agentlib:jdwp=transport=dt_socket,address=8885,server=y,suspend=n"

elif [ "$COMMAND" = "tasktracker" ] ; then

CLASS=org.apache.hadoop.mapred.TaskTracker

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_TASKTRACKER_OPTS -agentlib:jdwp=transport=dt_socket,address=8884,server=y,suspend=n"

Before running any experiments, we first empty the /data/hadoopdir/tmp and logs folders.
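A convenient way to do this on all four machines at once is a small loop over ssh. This is just an illustrative sketch; it assumes passwordless ssh between the nodes and that Hadoop is installed under /data/hadoop-0.19.2, as in the listings later in this article:

# hypothetical cleanup helper -- adjust the paths to your installation
for host in namenode datanode01 datanode02 datanode03; do
  ssh "$host" 'rm -rf /data/hadoopdir/tmp/* /data/hadoop-0.19.2/logs/*'
done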

 

2. Formatting HDFS

HDFS is formatted with the command: bin/hadoop namenode -format

This prints the following log:

10/11/20 19:52:21 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = namenode/192.168.1.104
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.19.2
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.19 -r 789657; compiled by 'root' on Tue Jun 30 12:40:50 EDT 2009
************************************************************/
10/11/20 19:52:21 INFO namenode.FSNamesystem: fsOwner=admin,sambashare
10/11/20 19:52:21 INFO namenode.FSNamesystem: supergroup=supergroup
10/11/20 19:52:21 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/11/20 19:52:21 INFO common.Storage: Image file of size 97 saved in 0 seconds.
10/11/20 19:52:21 INFO common.Storage: Storage directory /data/hadoopdir/tmp/dfs/name has been successfully formatted.
10/11/20 19:52:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at namenode/192.168.1.104
************************************************************/

At this point the following file tree appears under /data/hadoopdir/tmp on the NameNode:

+- dfs
+- name
+--- current
+---- edits
+---- fsimage
+---- fstime
+---- VERSION
+--- image
+---- fsimage

At this point, /data/hadoopdir/tmp on the DataNodes is still empty.

 

3. Starting Hadoop

Hadoop is started with bin/start-all.sh, which prints the following log:

starting namenode, logging to logs/hadoop-namenode-namenode.out

192.168.1.106: starting datanode, logging to logs/hadoop-datanode-datanode02.out

192.168.1.105: starting datanode, logging to logs/hadoop-datanode-datanode01.out

192.168.1.107: starting datanode, logging to logs/hadoop-datanode-datanode03.out

192.168.1.104: starting secondarynamenode, logging to logs/hadoop-secondarynamenode-namenode.out

starting jobtracker, logging to logs/hadoop-jobtracker-namenode.out

192.168.1.106: starting tasktracker, logging to logs/hadoop-tasktracker-datanode02.out

192.168.1.105: starting tasktracker, logging to logs/hadoop-tasktracker-datanode01.out

192.168.1.107: starting tasktracker, logging to logs/hadoop-tasktracker-datanode03.out

From this log we can see that the script started the NameNode, three DataNodes, the SecondaryNameNode, the JobTracker, and three TaskTrackers.

Next we run jps -l on the NameNode and on each of the three DataNodes to see which Java processes are actually running.

On the NameNode:

 

22214 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode

22107 org.apache.hadoop.hdfs.server.namenode.NameNode

22271 org.apache.hadoop.mapred.JobTracker

On datanode01:

12580 org.apache.hadoop.mapred.TaskTracker

12531 org.apache.hadoop.hdfs.server.datanode.DataNode

On datanode02:

10548 org.apache.hadoop.hdfs.server.datanode.DataNode

On datanode03:

12593 org.apache.hadoop.hdfs.server.datanode.DataNode

12644 org.apache.hadoop.mapred.TaskTracker

This matches our configuration above exactly.

Once Hadoop is running, the /data/hadoopdir/tmp directory changes as well, as we can see with ls -R.

On the NameNode:

  • In the name folder there is now an in_use.lock file, indicating that the NameNode has started
  • A new namesecondary folder has appeared, holding the SecondaryNameNode's data

 

.:

dfs

./dfs:

name namesecondary

./dfs/name:

current image in_use.lock

./dfs/name/current:

edits fsimage fstime VERSION

./dfs/name/image:

fsimage

./dfs/namesecondary:

current image in_use.lock

./dfs/namesecondary/current:

edits fsimage fstime VERSION

./dfs/namesecondary/image:

fsimage

On the DataNodes:

  • Two new folders, dfs and mapred, have appeared
  • The dfs folder stores HDFS block data
  • The mapred folder stores the data needed to run Map-Reduce Tasks

 

.:

dfs mapred

./dfs:

data

./dfs/data:

current detach in_use.lock storage tmp

./dfs/data/current:

dncp_block_verification.log.curr VERSION

./dfs/data/detach:

./dfs/data/tmp:

./mapred:

local

./mapred/local:

 

Of course, as Hadoop starts, many log files also appear under the logs folder.

On the NameNode, the logs are:

  • NameNode logs:
    • hadoop-namenode-namenode.log, the log4j output
    • hadoop-namenode-namenode.out, the stdout and stderr output
  • SecondaryNameNode logs:
    • hadoop-secondarynamenode-namenode.log, the log4j output
    • hadoop-secondarynamenode-namenode.out, the stdout and stderr output
  • JobTracker logs:
    • hadoop-jobtracker-namenode.log, the log4j output
    • hadoop-jobtracker-namenode.out, the stdout and stderr output

On a DataNode the logs are (taking datanode01 as an example):

  • DataNode logs:
    • hadoop-datanode-datanode01.log, the log4j output
    • hadoop-datanode-datanode01.out, the stdout and stderr output
  • TaskTracker logs:
    • hadoop-tasktracker-datanode01.log, the log4j output
    • hadoop-tasktracker-datanode01.out, the stdout and stderr output

Let us now look at the most significant information in these logs.

In hadoop-namenode-namenode.log we can see the NameNode's startup process:

Namenode up at: namenode/192.168.1.104:9000

// the number of files

Number of files = 0

Number of files under construction = 0

// load the fsimage and edits files to build the FSNamesystem

Image file of size 97 loaded in 0 seconds.

Edits file /data/hadoopdir/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.

Image file of size 97 saved in 0 seconds.

Finished loading FSImage in 12812 msecs

// count blocks and their states

Total number of blocks = 0

Number of invalid blocks = 0

Number of under-replicated blocks = 0

Number of over-replicated blocks = 0

// leave safe mode

Leaving safe mode after 12 secs.

// register the DataNodes

Adding a new node: /default-rack/192.168.1.106:50010

Adding a new node: /default-rack/192.168.1.105:50010

Adding a new node: /default-rack/192.168.1.107:50010

In hadoop-secondarynamenode-namenode.log we can see the SecondaryNameNode's startup process:

Secondary Web-server up at: 0.0.0.0:50090

// the checkpoint period

Checkpoint Period :3600 secs (60 min)

Log Size Trigger :67108864 bytes (65536 KB)

// perform a checkpoint: download fsimage and edits from the NameNode

Downloaded file fsimage size 97 bytes.

Downloaded file edits size 370 bytes.

// load the edits file, merge, and save the merged fsimage; note that fsimage has grown

Edits file /data/hadoopdir/tmp/dfs/namesecondary/current/edits of size 370 edits # 6 loaded in 0 seconds.

Image file of size 540 saved in 0 seconds.

// this checkpoint is done

Checkpoint done. New Image Size: 540

In hadoop-jobtracker-namenode.log we can see the JobTracker's startup process:

JobTracker up at: 9001

JobTracker webserver: 50030

// clean up the /data/hadoopdir/tmp/mapred/system folder in HDFS, which stores data shared during Map-Reduce runs

Cleaning up the system directory

// heartbeats keep arriving from the TaskTrackers; the first one registers the TaskTracker

Got heartbeat from: tracker_datanode01:localhost/127.0.0.1:58297

Adding a new node: /default-rack/datanode01

Got heartbeat from: tracker_datanode03:localhost/127.0.0.1:37546

Adding a new node: /default-rack/datanode03

In hadoop-datanode-datanode01.log we can see the DataNode's startup process:

// format the folder where the DataNode stores blocks

Storage directory /data/hadoopdir/tmp/dfs/data is not formatted.

Formatting ...

// start the DataNode

Opened info server at 50010

Balancing bandwith is 1048576 bytes/s

Initializing JVM Metrics with processName=DataNode, sessionId=null

// register this DataNode with the NameNode

dnRegistration = DatanodeRegistration(datanode01:50010, storageID=, infoPort=50075, ipcPort=50020)

New storage id DS-1042573498-192.168.1.105-50010-1290313555129 is assigned to data-node 192.168.1.105:5001

DatanodeRegistration(192.168.1.105:50010, storageID=DS-1042573498-192.168.1.105-50010-1290313555129, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/data/hadoopdir/tmp/dfs/data/current'}

// start the block scanner

Starting Periodic block scanner.

In hadoop-tasktracker-datanode01.log we can see the TaskTracker's startup process:

// start the TaskTracker

Initializing JVM Metrics with processName=TaskTracker, sessionId=

TaskTracker up at: localhost/127.0.0.1:58297

Starting tracker tracker_datanode01:localhost/127.0.0.1:58297

// send heartbeats to the JobTracker

Got heartbeatResponse from JobTracker with responseId: 0 and 0 actions

A special case is hadoop-tasktracker-datanode02.log: because we set both the maximum number of Map Tasks and the maximum number of Reduce Tasks to 0, it reports an exception, "Can not start task tracker because java.lang.IllegalArgumentException", and the TaskTracker on datanode02 never starts.

 

Once Hadoop is up, some folders are also created in HDFS, such as /data/hadoopdir/tmp/mapred/system, which holds shared resources while Map-Reduce jobs run.

4. Putting a File into HDFS

A file is put into HDFS with the command: bin/hadoop fs -put inputdata /data/input

After the put completes, we inspect HDFS with bin/hadoop fs -ls /data/input, which shows:

-rw-r--r-- 3 hadoop supergroup 6119928 2010-11-21 00:47 /data/input/inputdata
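We can also ask the NameNode where the replicas of this file's block ended up. A quick, optional check with fsck (the block ID and locations will of course differ from run to run):

bin/hadoop fsck /data/input/inputdata -files -blocks -locations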

At this point the /data/hadoopdir/tmp folders on the DataNodes have changed:

  • Under /data/hadoopdir/tmp/dfs/data/current on datanode01, datanode02, and datanode03 the following block files have appeared
  • So the block has indeed been replicated three times

 

.:

dfs mapred

./dfs:

data

./dfs/data:

current detach in_use.lock storage tmp

./dfs/data/current:

blk_2672607439166801630 blk_2672607439166801630_1002.meta dncp_block_verification.log.curr VERSION

./dfs/data/detach:

./dfs/data/tmp:

./mapred:

local

./mapred/local:

While the file was being written, the logs show the following.

The NameNode's hadoop-namenode-namenode.log:

// create /data/input/inputdata

ugi=admin,sambashareip=/192.168.1.104 cmd=create src=/data/input/inputdata dst=null perm=hadoop:supergroup:rw-r--r--

// allocate a block

NameSystem.allocateBlock: /data/input/inputdata. blk_2672607439166801630_1002

NameSystem.addStoredBlock: blockMap updated: 192.168.1.107:50010 is added to blk_2672607439166801630_1002 size 6119928

NameSystem.addStoredBlock: blockMap updated: 192.168.1.105:50010 is added to blk_2672607439166801630_1002 size 6119928

NameSystem.addStoredBlock: blockMap updated: 192.168.1.106:50010 is added to blk_2672607439166801630_1002 size 6119928

datanode01's hadoop-datanode-datanode01.log:

// datanode01 receives the block from the client

Receiving block blk_2672607439166801630_1002 src: /192.168.1.104:41748 dest: /192.168.1.105:50010

src: /192.168.1.104:41748, dest: /192.168.1.105:50010, bytes: 6119928, op: HDFS_WRITE, cliID: DFSClient_-1541812792, srvID: DS-1042573498-192.168.1.105-50010-1290313555129, blockid: blk_2672607439166801630_1002

PacketResponder 2 for block blk_2672607439166801630_1002 terminating

datanode02's hadoop-datanode-datanode02.log:

// datanode02 receives the block from datanode01

Receiving block blk_2672607439166801630_1002 src: /192.168.1.105:60266 dest: /192.168.1.106:50010

src: /192.168.1.105:60266, dest: /192.168.1.106:50010, bytes: 6119928, op: HDFS_WRITE, cliID: DFSClient_-1541812792, srvID: DS-1366730865-192.168.1.106-50010-1290313543717, blockid: blk_2672607439166801630_1002

PacketResponder 1 for block blk_2672607439166801630_1002 terminating

datanode03's hadoop-datanode-datanode03.log:

// datanode03 receives the block from datanode02

Receiving block blk_2672607439166801630_1002 src: /192.168.1.106:58899 dest: /192.168.1.107:50010

src: /192.168.1.106:58899, dest: /192.168.1.107:50010, bytes: 6119928, op: HDFS_WRITE, cliID: DFSClient_-1541812792, srvID: DS-765014609-192.168.1.107-50010-1290313555841, blockid: blk_2672607439166801630_1002

PacketResponder 0 for block blk_2672607439166801630_1002 terminating

Verification succeeded for blk_2672607439166801630_1002

 

5. Running a Map-Reduce Program

A Map-Reduce job is run with the command: bin/hadoop jar hadoop-0.19.2-examples.jar wordcount /data/input /data/output

To watch Map-Reduce proceed step by step, we first attach a remote debugger to the JobTracker and set a breakpoint in the JobTracker.submitJob method.
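With the bin/hadoop change above, the JobTracker listens for a debugger on port 8885, so any JPDA-capable debugger can attach; an Eclipse "Remote Java Application" launch configuration works just as well. A minimal sketch using jdb (the breakpoint is reported as deferred until the class is loaded):

jdb -connect com.sun.jdi.SocketAttach:hostname=namenode,port=8885
# then, at the jdb prompt:
stop in org.apache.hadoop.mapred.JobTracker.submitJob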

As discussed in the previous article, before the client submits the job to the JobTracker it uploads three kinds of files needed for the run into HDFS, where the JobTracker and the TaskTrackers can fetch them:

  • the jar to run: job.jar
  • the input split information: job.split
  • the job configuration: job.xml

When the Map-Reduce program stops inside JobTracker.submitJob, we can see the following change in HDFS:

bin/hadoop fs -ls /data/hadoopdir/tmp/mapred/system

A new folder, job_201011202025_0001, has appeared, named after the ID of the currently running job; it contains three files:

bin/hadoop fs -ls /data/hadoopdir/tmp/mapred/system/job_201011202025_0001

Found 3 items

-rw-r--r-- /data/hadoopdir/tmp/mapred/system/job_201011202025_0001/job.jar

-rw-r--r-- /data/hadoopdir/tmp/mapred/system/job_201011202025_0001/job.split

-rw-r--r-- /data/hadoopdir/tmp/mapred/system/job_201011202025_0001/job.xml

Now we can detach the remote debugger from the JobTracker.

Inside JobTracker.submitJob, these uploaded files are read and the job is split into Map Tasks and Reduce Tasks.

When the TaskTrackers ask the JobTracker, through their heartbeats, for a Map Task or Reduce Task to run, then given our configuration, datanode01 will request Map Tasks and datanode03 will request Reduce Tasks.

 

Let us first look at how the Map Task runs on datanode01:

When the TaskTracker receives a Task, it calls TaskTracker.localizeJob to copy the three job files from HDFS into a local folder, and then calls TaskInProgress.localizeTask to create the Task's local working directory.

We remote-debug the TaskTracker on datanode01, setting breakpoints in the localizeJob and localizeTask methods. When the program stops after localizeTask has finished, a new folder, job_201011202025_0001, appears under /data/hadoopdir/tmp/mapred/local/taskTracker/jobcache on datanode01, with the following structure:

datanode01:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001$ ls -R

.:

attempt_201011202025_0001_m_000000_0 attempt_201011202025_0001_m_000003_0 jars job.xml work

./attempt_201011202025_0001_m_000000_0:

job.xml split.dta work

./attempt_201011202025_0001_m_000000_0/work:

./attempt_201011202025_0001_m_000003_0:

pid work

./attempt_201011202025_0001_m_000003_0/work:

tmp

./attempt_201011202025_0001_m_000003_0/work/tmp:

./jars:

job.jar META-INF org

./work:

Here job.xml, job.jar, and split.dta are the job configuration, the job jar, and the input split, and the jars folder holds the unpacked contents of job.jar.

Next, datanode01 creates a Child JVM to execute the Task. Running ps aux | grep java on datanode01 at this point shows that a new JVM has been created:

/bin/java

……

-Xmx200m -agentlib:jdwp=transport=dt_socket,address=8883,server=y,suspend=y

……

org.apache.hadoop.mapred.Child

127.0.0.1 58297

attempt_201011202025_0001_m_000003_0 2093922206

From the JVM's arguments we can tell that this is a map task. From the files above we can also see that this TaskTracker has in fact run two map tasks in the same Child JVM: attempt_201011202025_0001_m_000003_0, which has no input split and turns out to be a job setup task, and attempt_201011202025_0001_m_000000_0, which is a real, data-processing map task. Of course, if there were enough input data, several data-processing map tasks would be run.
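Note that mapred.child.java.opts was configured with suspend=y, so a freshly launched Child JVM pauses before running any task code and waits for a debugger on port 8883. A minimal attach sketch with jdb (the hostname is simply the node the Task JVM runs on; set breakpoints before resuming with cont):

jdb -connect com.sun.jdi.SocketAttach:hostname=datanode01,port=8883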

We can attach a debugger to this Child JVM in the same way and set a breakpoint in MapTask.run. From the previous article we know that map output is first kept in a buffer and, once enough data accumulates, spilled to disk as spill files. Before the map task finishes, a look into the attempt_201011202025_0001_m_000000_0 folder shows that a large number of spill files have already been generated:

datanode01:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001/attempt_201011202025_0001_m_000000_0$ ls -R

.:

job.xml output split.dta work

./output:

spill0.out spill16.out spill22.out spill29.out spill35.out spill41.out spill48.out spill54.out spill60.out spill67.out spill73.out spill7.out

spill10.out spill17.out spill23.out spill2.out spill36.out spill42.out spill49.out spill55.out spill61.out spill68.out spill74.out spill80.out

spill11.out spill18.out spill24.out spill30.out spill37.out spill43.out spill4.out spill56.out spill62.out spill69.out spill75.out spill81.out

spill12.out spill19.out spill25.out spill31.out spill38.out spill44.out spill50.out spill57.out spill63.out spill6.out spill76.out spill82.out

spill13.out spill1.out spill26.out spill32.out spill39.out spill45.out spill51.out spill58.out spill64.out spill70.out spill77.out spill83.out

spill14.out spill20.out spill27.out spill33.out spill3.out spill46.out spill52.out spill59.out spill65.out spill71.out spill78.out spill8.out

spill15.out spill21.out spill28.out spill34.out spill40.out spill47.out spill53.out spill5.out spill66.out spill72.out spill79.out spill9.out

./work:

tmp

./work/tmp:

When the whole map task finishes, all the spill files are merged into a single file. Looking at attempt_201011202025_0001_m_000000_0 again:

datanode01:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001/attempt_201011202025_0001_m_000000_0$ ls -R
.:
job.xml output split.dta work

./output:
file.out file.out.index

./work:
tmp

./work/tmp:

Of course, if several map tasks process data, several file.out files are produced. In this example only two map tasks process data, so the final result is:

datanode01:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001$ ls -R attempt_201011202025_0001_m_00000*

attempt_201011202025_0001_m_000000_0:

job.xml output split.dta work

attempt_201011202025_0001_m_000000_0/output:

file.out file.out.index

attempt_201011202025_0001_m_000000_0/work:

tmp

attempt_201011202025_0001_m_000000_0/work/tmp:

attempt_201011202025_0001_m_000001_0:

job.xml output split.dta work

attempt_201011202025_0001_m_000001_0/output:

file.out file.out.index

attempt_201011202025_0001_m_000001_0/work:

tmp

attempt_201011202025_0001_m_000001_0/work/tmp:

attempt_201011202025_0001_m_000003_0:

pid work

attempt_201011202025_0001_m_000003_0/work:

tmp

attempt_201011202025_0001_m_000003_0/work/tmp:

 

Now let us look at how the Reduce Task runs on datanode03:

We likewise remote-debug the TaskTracker on datanode03, with breakpoints in localizeJob and localizeTask. When the program stops after localizeTask has finished, /data/hadoopdir/tmp/mapred/local/taskTracker/jobcache on datanode03 also contains a new folder, job_201011202025_0001, with the following structure:

datanode03:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001$ ls -R attempt_201011202025_0001_r_00000*
attempt_201011202025_0001_r_000000_0:
job.xml work

attempt_201011202025_0001_r_000000_0/work:
tmp

attempt_201011202025_0001_r_000000_0/work/tmp:

attempt_201011202025_0001_r_000002_0:
pid work

attempt_201011202025_0001_r_000002_0/work:
tmp

attempt_201011202025_0001_r_000002_0/work/tmp:

Of these two Reduce Tasks, attempt_201011202025_0001_r_000002_0 is a job setup task; the one that actually processes data is attempt_201011202025_0001_r_000000_0.

Next, datanode03 creates a Child JVM to execute the Task. Running ps aux | grep java on datanode03 shows that a new JVM has been created:

/bin/java

……

-Xmx200m -agentlib:jdwp=transport=dt_socket,address=8883,server=y,suspend=y -

……

org.apache.hadoop.mapred.Child

127.0.0.1 37546

attempt_201011202025_0001_r_000002_0 516504201

From the JVM's arguments we can tell that this is a reduce task.

From the previous article we know that a Reduce Task has three phases: copy, sort, and reduce.

The copy phase copies all the map outputs to the reduce task's local disk:

datanode03:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001/attempt_201011202025_0001_r_000000_0$ ls -R

.:

job.xml output pid work

./output:

map_0.out map_1.out map_2.out map_3.out

./work:

tmp

./work/tmp:

As shown above, since there are four map tasks in total, four map_*.out files are copied locally.

While copying is in progress, a background thread pre-merges the map outputs that have already arrived locally into *.merged files. The merge follows io.sort.factor, which with our configuration means merging two at a time; below, map_2.out and map_3.out have been merged into map_3.out.merged. The copy phase finished before the other two files were merged, so the background merge thread stopped as well.

datanode03:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001/attempt_201011202025_0001_r_000000_0$ ls -R
.:
job.xml output pid work

./output:
map_0.out map_1.out map_3.out.merged

./work:
tmp

./work/tmp:

The sort phase merges and sorts the copied map outputs, again io.sort.factor files at a time, i.e. two at a time. Below, map_0.out and map_1.out have been merged into intermediate.1; together with the remaining map_3.out.merged, the number of files is now below io.sort.factor, so no further merging takes place.

datanode03:/data/hadoopdir/tmp/mapred/local/attempt_201011202025_0001_r_000000_0$ ls -r

intermediate.1

datanode03:/data/hadoopdir/tmp/mapred/local/taskTracker/jobcache/job_201011202025_0001/attempt_201011202025_0001_r_000000_0$ ls -R
.:
job.xml output pid work

./output:
map_3.out.merged

./work:
tmp

./work/tmp:

The reduce phase repeatedly calls the reducer's reduce function and writes the results to HDFS.

namenode:/data/hadoop-0.19.2$ bin/hadoop fs -ls /data/output

Found 2 items

/data/output/_logs

/data/output/part-00000
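Since wordcount writes plain text, a quick way to sanity-check the result is simply to cat the output file (shown here just as an illustration; the words and counts depend entirely on the input data):

bin/hadoop fs -cat /data/output/part-00000 | head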

 

Of course, the logs also let us follow the Map-Reduce run.

The command-line output is:

namenode:/data/hadoop-0.19.2$ bin/hadoop jar hadoop-0.19.2-examples.jar wordcount /data/input /data/output

10/11/22 07:38:44 INFO mapred.FileInputFormat: Total input paths to process : 4

10/11/22 07:38:45 INFO mapred.JobClient: Running job: job_201011202025_0001

10/11/22 07:38:46 INFO mapred.JobClient: map 0% reduce 0%

10/11/22 07:39:14 INFO mapred.JobClient: map 25% reduce 0%

10/11/22 07:39:23 INFO mapred.JobClient: map 50% reduce 0%

10/11/22 07:39:27 INFO mapred.JobClient: map 75% reduce 0%

10/11/22 07:39:30 INFO mapred.JobClient: map 100% reduce 0%

10/11/22 07:39:31 INFO mapred.JobClient: map 100% reduce 8%

10/11/22 07:39:36 INFO mapred.JobClient: map 100% reduce 25%

10/11/22 07:39:40 INFO mapred.JobClient: map 100% reduce 100%

10/11/22 07:39:41 INFO mapred.JobClient: Job complete: job_201011202025_0001

10/11/22 07:39:41 INFO mapred.JobClient: Counters: 16

10/11/22 07:39:41 INFO mapred.JobClient: File Systems

10/11/22 07:39:41 INFO mapred.JobClient: HDFS bytes read=61199280

10/11/22 07:39:41 INFO mapred.JobClient: HDFS bytes written=534335

10/11/22 07:39:41 INFO mapred.JobClient: Local bytes read=74505214

10/11/22 07:39:41 INFO mapred.JobClient: Local bytes written=81308914

10/11/22 07:39:41 INFO mapred.JobClient: Job Counters

// four maps, one reduce

10/11/22 07:39:41 INFO mapred.JobClient: Launched reduce tasks=1

10/11/22 07:39:41 INFO mapred.JobClient: Launched map tasks=4

10/11/22 07:39:41 INFO mapred.JobClient: Data-local map tasks=4

10/11/22 07:39:41 INFO mapred.JobClient: Map-Reduce Framework

10/11/22 07:39:41 INFO mapred.JobClient: Reduce input groups=37475

10/11/22 07:39:41 INFO mapred.JobClient: Combine output records=351108

10/11/22 07:39:41 INFO mapred.JobClient: Map input records=133440

10/11/22 07:39:41 INFO mapred.JobClient: Reduce output records=37475

10/11/22 07:39:41 INFO mapred.JobClient: Map output bytes=31671148

10/11/22 07:39:41 INFO mapred.JobClient: Map input bytes=24479712

10/11/22 07:39:41 INFO mapred.JobClient: Combine input records=2001312

10/11/22 07:39:41 INFO mapred.JobClient: Map output records=1800104

10/11/22 07:39:41 INFO mapred.JobClient: Reduce input records=149900

In the NameNode's hadoop-jobtracker-namenode.log we can see what the JobTracker did:

// create a Job and split it into four map tasks

JobInProgress: Input size for job job_201011220735_0001 = 24479712

JobInProgress: Split info for job:job_201011220735_0001

JobInProgress: tip:task_201011220735_0001_m_000000 has split on node:/default-rack/datanode02

JobInProgress: tip:task_201011220735_0001_m_000000 has split on node:/default-rack/datanode01

JobInProgress: tip:task_201011220735_0001_m_000000 has split on node:/default-rack/datanode03

JobInProgress: tip:task_201011220735_0001_m_000001 has split on node:/default-rack/datanode03

JobInProgress: tip:task_201011220735_0001_m_000001 has split on node:/default-rack/datanode01

JobInProgress: tip:task_201011220735_0001_m_000001 has split on node:/default-rack/datanode02

JobInProgress: tip:task_201011220735_0001_m_000002 has split on node:/default-rack/datanode02

JobInProgress: tip:task_201011220735_0001_m_000002 has split on node:/default-rack/datanode01

JobInProgress: tip:task_201011220735_0001_m_000002 has split on node:/default-rack/datanode03

JobInProgress: tip:task_201011220735_0001_m_000003 has split on node:/default-rack/datanode01

JobInProgress: tip:task_201011220735_0001_m_000003 has split on node:/default-rack/datanode02

JobInProgress: tip:task_201011220735_0001_m_000003 has split on node:/default-rack/datanode03

 

// via its heartbeat, datanode01 asks the JobTracker for a job setup task to run

JobTracker: Adding task 'attempt_201011220735_0001_m_000005_0' to tip task_201011220735_0001_m_000005, for tracker 'tracker_datanode01:localhost/127.0.0.1:48339'

JobTracker: tracker_datanode01:localhost/127.0.0.1:48339 -> LaunchTask: attempt_201011220735_0001_m_000005_0

JobInProgress: Task 'attempt_201011220735_0001_m_000005_0' has completed task_201011220735_0001_m_000005 successfully.

 

// datanode01 asks the JobTracker to run the first map task

JobTracker: Adding task 'attempt_201011220735_0001_m_000000_0' to tip task_201011220735_0001_m_000000, for tracker 'tracker_datanode01:localhost/127.0.0.1:48339'

JobInProgress: Choosing data-local task task_201011220735_0001_m_000000

JobTracker: tracker_datanode01:localhost/127.0.0.1:48339 -> LaunchTask: attempt_201011220735_0001_m_000000_0

JobInProgress: Task 'attempt_201011220735_0001_m_000000_0' has completed task_201011220735_0001_m_000000 successfully.

 

// datanode01 asks the JobTracker to run the second map task

JobTracker: Adding task 'attempt_201011220735_0001_m_000001_0' to tip task_201011220735_0001_m_000001, for tracker 'tracker_datanode01:localhost/127.0.0.1:48339'

JobInProgress: Choosing data-local task task_201011220735_0001_m_000001

JobTracker: tracker_datanode01:localhost/127.0.0.1:48339 -> LaunchTask: attempt_201011220735_0001_m_000001_0

JobInProgress: Task 'attempt_201011220735_0001_m_000001_0' has completed task_201011220735_0001_m_000001 successfully.

 

// datanode01 asks the JobTracker to run the third map task

JobTracker: Adding task 'attempt_201011220735_0001_m_000002_0' to tip task_201011220735_0001_m_000002, for tracker 'tracker_datanode01:localhost/127.0.0.1:48339'

JobInProgress: Choosing data-local task task_201011220735_0001_m_000002

JobTracker: tracker_datanode01:localhost/127.0.0.1:48339 -> LaunchTask: attempt_201011220735_0001_m_000002_0

JobInProgress: Task 'attempt_201011220735_0001_m_000002_0' has completed task_201011220735_0001_m_000002 successfully.

 

// datanode01 asks the JobTracker to run the fourth map task

JobTracker: Adding task 'attempt_201011220735_0001_m_000003_0' to tip task_201011220735_0001_m_000003, for tracker 'tracker_datanode01:localhost/127.0.0.1:48339'

JobInProgress: Choosing data-local task task_201011220735_0001_m_000003

JobTracker: tracker_datanode01:localhost/127.0.0.1:48339 -> LaunchTask: attempt_201011220735_0001_m_000003_0

JobTracker: Got heartbeat from: tracker_datanode01:localhost/127.0.0.1:48339 (initialContact: false acceptNewTasks: true) with responseId: 39

JobInProgress: Task 'attempt_201011220735_0001_m_000003_0' has completed task_201011220735_0001_m_000003 successfully.



// datanode03 asks the JobTracker to run a commit task

JobTracker: Adding task 'attempt_201011220735_0001_r_000000_0' to tip task_201011220735_0001_r_000000, for tracker 'tracker_datanode03:localhost/127.0.0.1:44118'

JobTracker: tracker_datanode03:localhost/127.0.0.1:44118 -> LaunchTask: attempt_201011220735_0001_r_000000_0

JobTracker: tracker_datanode03:localhost/127.0.0.1:44118 -> CommitTaskAction: attempt_201011220735_0001_r_000000_0

JobInProgress: Task 'attempt_201011220735_0001_r_000000_0' has completed task_201011220735_0001_r_000000 successfully.

 

// datanode03 asks the JobTracker to run a reduce task

JobTracker: Adding task 'attempt_201011220735_0001_r_000001_0' to tip task_201011220735_0001_r_000001, for tracker 'tracker_datanode03:localhost/127.0.0.1:44118'

JobTracker: tracker_datanode03:localhost/127.0.0.1:44118 -> LaunchTask: attempt_201011220735_0001_r_000001_0

JobInProgress: Task 'attempt_201011220735_0001_r_000001_0' has completed task_201011220735_0001_r_000001 successfully.

 

JobInProgress: Job job_201011220735_0001 has completed successfully.

Likewise, datanode01's hadoop-tasktracker-datanode01.log shows what its TaskTracker did.

Under logs/userlogs on datanode01 we find the logs printed by the Child JVM created to run the map tasks, one folder per map task attempt; in this example, because several map tasks share one JVM, only one set of log files was actually written:

datanode01:/data/hadoop-0.19.2/logs/userlogs$ ls -R

.:

attempt_201011220735_0001_m_000000_0 attempt_201011220735_0001_m_000002_0 attempt_201011220735_0001_m_000005_0

attempt_201011220735_0001_m_000001_0 attempt_201011220735_0001_m_000003_0

./attempt_201011220735_0001_m_000000_0:

log.index

./attempt_201011220735_0001_m_000001_0:

log.index

./attempt_201011220735_0001_m_000002_0:

log.index

./attempt_201011220735_0001_m_000003_0:

log.index

./attempt_201011220735_0001_m_000005_0:

log.index stderr stdout syslog

Likewise, datanode03's hadoop-tasktracker-datanode03.log shows what its TaskTracker did.

Under logs/userlogs on datanode03 there is also a set of folders, one per reduce task attempt; again, several reduce tasks share one JVM:

datanode03:/data/hadoop-0.19.2/logs/userlogs$ ls -R

.:

attempt_201011220735_0001_r_000000_0 attempt_201011220735_0001_r_000001_0

./attempt_201011220735_0001_r_000000_0:

log.index stderr stdout syslog

./attempt_201011220735_0001_r_000001_0:

log.index
