Introduction:
Hadoop is written in Java, so out of the box it mostly supports Java directly, which is disheartening for those of us who are not fluent in Java. Fortunately, Doug Cutting and company did not abandon the large population of non-Java programmers: Hadoop ships with several interfaces for them. This article covers one of those interfaces: using hadoop-streaming to run map-reduce programs written in other languages.
Example:
Let me use a small program from when I first started learning Hadoop. We want to count how many times each word appears in a passage of text, for example this short one:
wo wo wo
shi
yi ke
xiao xiao de shi tou
wo hai shi zhe yang
We store it in the /usr/local/hadoop/book directory under any file name. Then we write a map program that splits out every word and emits it with a count of one. The script is as follows:
#!/usr/bin/env python
import sys

# Read lines from standard input, split each line into words,
# and emit one "word<TAB>1" pair per word.
for line in sys.stdin:
    words = line.strip().split()
    for word in words:
        print('%s\t%s' % (word, 1))
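As a quick sanity check, the mapper's tokenize-and-emit logic can be exercised in plain Python. The map_words helper below is a hypothetical function written only for illustration; it is not part of mapper.py:

```python
def map_words(text):
    # Tokenize each line on whitespace and collect (word, 1) pairs,
    # mirroring what mapper.py writes to stdout.
    pairs = []
    for line in text.splitlines():
        for word in line.strip().split():
            pairs.append((word, 1))
    return pairs

print(map_words("wo wo wo\nshi"))
# → [('wo', 1), ('wo', 1), ('wo', 1), ('shi', 1)]
```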
A note on #!/usr/bin/env python: it tells the shell which interpreter should run the script. The line can be omitted, but then the invocation changes slightly; this is shown further below.
The code above is our map program; name it mapper.py and save it in /usr/local/hadoop/. Next, write the reduce program, which counts the occurrences of each word in the stream the mapper emits, and save it in the same directory:
#!/usr/bin/env python
import sys

current_word = None
current_count = 0
word = None

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        continue
    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# do not forget to output the last word if needed!
if current_word == word:
    print('%s\t%s' % (current_word, current_count))
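For what it is worth, the same grouping can be written more compactly with itertools.groupby, which relies on the very same sorted-input guarantee. This is just an alternative sketch, not the script used in the rest of this article:

```python
from itertools import groupby
from operator import itemgetter

def reduce_counts(lines):
    # Each line is "word<TAB>count"; the input must already be sorted
    # by word, exactly as Hadoop's shuffle phase guarantees.
    pairs = (line.strip().split('\t', 1) for line in lines)
    totals = []
    for word, group in groupby(pairs, key=itemgetter(0)):
        totals.append((word, sum(int(count) for _, count in group)))
    return totals

print(reduce_counts(['hello\t1', 'hh\t1', 'hh\t1', 'lso\t1']))
# → [('hello', 1), ('hh', 2), ('lso', 1)]
```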
Then make both scripts executable: chmod +x /usr/local/hadoop/mapper.py /usr/local/hadoop/reducer.py
Next, let's test whether the programs work. We echo a short line of text and pipe it through the map and reduce programs in turn. (Note that the reducer assumes its input is sorted by key; in general you should insert sort between the two commands to mimic Hadoop's shuffle. The input below happens to arrive already grouped, so it works either way.)
buptpwy@buptpwy-Lenovo:/usr/local/hadoop$ echo "hello hh hh lso"|/usr/local/hadoop/mapper.py |/usr/local/hadoop/reducer.py
hello 1
hh 2
lso 1
If the scripts do not specify an interpreter line, we can invoke them through python explicitly:
buptpwy@buptpwy-Lenovo:/usr/local/hadoop$ echo "hello hh hh lso"|python /usr/local/hadoop/mapper.py |python /usr/local/hadoop/reducer.py
hello 1
hh 2
lso 1
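The whole map → sort → reduce round trip can also be simulated in a single Python snippet, with an explicit sort standing in for the shuffle Hadoop performs between the two scripts. The word_count function here is illustrative only:

```python
def word_count(text):
    # Map: emit (word, 1) pairs, as mapper.py does.
    mapped = [(w, 1) for line in text.splitlines() for w in line.split()]
    # Shuffle: Hadoop sorts map output by key before the reducer sees it.
    mapped.sort()
    # Reduce: sum counts per word, relying on the sorted order.
    result = {}
    for word, count in mapped:
        result[word] = result.get(word, 0) + count
    return result

sample = "wo wo wo\nshi\nyi ke\nxiao xiao de shi tou\nwo hai shi zhe yang"
print(word_count(sample))
```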
Now for the main event: invoking hadoop-streaming to run our programs. First, upload the file we want to process into HDFS:
buptpwy@buptpwy-Lenovo:~/下載/hadoop/hadoop-0.20.2$ bin/hadoop dfs -put /usr/local/hadoop/book book
The first path is the file's location on the local filesystem; the second is its destination in HDFS.
We can then call hadoop-streaming to run the map-reduce job, as follows:
buptpwy@buptpwy-Lenovo:~/下載/hadoop/hadoop-0.20.2$ bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar -mapper /usr/local/hadoop/mapper.py -reducer /usr/local/hadoop/reducer.py -input book/* -output book-out
-mapper gives the location of the map program, -reducer the location of the reduce program, -input the location of the data, and -output where the results go. If the job succeeds, you will see output like the following:
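One practical note: the command above assumes the scripts exist at the same path on every node. If they do not, the streaming jar's -file option ships them with the job; a sketch of the same command with that option added (paths as in this article):

```shell
bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -file /usr/local/hadoop/mapper.py  -mapper mapper.py \
    -file /usr/local/hadoop/reducer.py -reducer reducer.py \
    -input book/* -output book-out
```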
packageJobJar: [/home/buptpwy/hadoop_tmp/hadoop-unjar580737856059781750/] [] /tmp/streamjob3726475424298491575.jar tmpDir=null
15/05/09 20:17:10 INFO mapred.FileInputFormat: Total input paths to process : 1
15/05/09 20:17:10 INFO streaming.StreamJob: getLocalDirs(): [/usr/hadoop-0.20.2/maperd/local]
15/05/09 20:17:10 INFO streaming.StreamJob: Running job: job_201505091953_0004
15/05/09 20:17:10 INFO streaming.StreamJob: To kill this job, run:
15/05/09 20:17:10 INFO streaming.StreamJob: /home/buptpwy/下載/hadoop/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201505091953_0004
15/05/09 20:17:10 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201505091953_0004
15/05/09 20:17:11 INFO streaming.StreamJob: map 0% reduce 0%
15/05/09 20:17:19 INFO streaming.StreamJob: map 100% reduce 0%
15/05/09 20:17:31 INFO streaming.StreamJob: map 100% reduce 100%
15/05/09 20:17:34 INFO streaming.StreamJob: Job complete: job_201505091953_0004
15/05/09 20:17:34 INFO streaming.StreamJob: Output: book-out
Afterwards we can find the book-out directory in HDFS and read the results we wanted from it.
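The output can be listed and printed straight from HDFS with the same dfs commands used for the upload (part-00000 is the conventional name of a single reducer's output file):

```shell
bin/hadoop dfs -ls book-out
bin/hadoop dfs -cat book-out/part-00000
```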
Summary
Finally, my own takeaway: hadoop-streaming provides an interface that makes it easy to write Hadoop map-reduce programs in other languages. Under the hood, though, the job still runs through the Java framework, piping data to and from your external processes over stdin/stdout, so it is slower than calling the Java API directly. That is simply the price of the convenience. As for the other integration options, I will write about them once I have learned them.