Processing data with Hadoop MapReduce (Python)

1. Hadoop client environment

1. Work directly on a machine that already runs the Hadoop services. You will then be talking to the local Hadoop cluster and no further client configuration is needed.

2. If you want to submit to a remote Hadoop cluster, you need to configure the relevant client files, in the same way as when setting up the cluster itself.

For cluster setup, see https://blog.csdn.net/xzpdxz/article/details/86692631 and adjust the configuration accordingly.

Note: make sure Java is available in your environment.
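
As a quick sanity check (my own addition, not from the original post), the small Python snippet below simply confirms that the java and hadoop executables are visible on the client's PATH before you try to submit anything:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Hypothetical helper: verify that java and hadoop are reachable on PATH.
import shutil

for tool in ("java", "hadoop"):
    path = shutil.which(tool)   # Python 3.3+; returns None if not found
    print("%-6s -> %s" % (tool, path if path else "NOT FOUND on PATH"))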

2. MapReduce

mapper: can be understood as the computation over each split (shard) of the data

reducer: can be understood as merging the per-split results into the final aggregate

The classic example is counting word frequencies.
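
Before touching Hadoop, the whole map / shuffle / reduce flow for word counting can be illustrated in a few lines of plain Python (my sketch, not part of the actual job): the "map" step emits (word, 1) pairs, sorting by word stands in for Hadoop's shuffle, and the "reduce" step sums the counts per word.

# -*- coding: utf-8 -*-
# Local illustration of the word-count data flow; not part of the Hadoop job.
from itertools import groupby

lines = ["to be or not to be"]

# "map": emit one (word, 1) pair per word
pairs = [(word, 1) for line in lines for word in line.split()]

# "shuffle": Hadoop sorts mapper output by key; locally a plain sort is enough
pairs.sort(key=lambda kv: kv[0])

# "reduce": sum the counts of each word
for word, group in groupby(pairs, key=lambda kv: kv[0]):
    print('%s\t%d' % (word, sum(count for _, count in group)))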

a) Prepare a long English text, input.txt

Upload input.txt to HDFS:

[hadoop_test@hserver1 hadoop_test] # hadoop dfs -put input.txt /user/hadoop_test/xxxx/input.txt

b) The mapper.py script

It splits each input line into words on whitespace and emits each word together with a count of 1.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys

# Read text from stdin and emit one "word<TAB>1" pair per word.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print('%s\t%s' % (word, 1))

c) The reducer.py script

It sums up the counts emitted by the mapper to get each word's frequency. Hadoop sorts the mapper output by key before it reaches the reducer, so all pairs for the same word arrive one after another.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys

current_word = None
current_count = 0
word = None

# Input arrives sorted by word, so equal words are adjacent.
for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t')

    try:
        count = int(count)
    except ValueError:
        # skip malformed lines
        continue

    if current_word == word:
        current_count += count
    else:
        if current_word:
            # a new word begins: emit the finished count of the previous one
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# flush the count of the last word
if current_word == word:
    print('%s\t%s' % (current_word, current_count))
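
Before submitting to the cluster, it is worth simulating the streaming pipeline locally: Hadoop Streaming essentially runs mapper | sort | reducer over stdin/stdout. Below is a minimal sketch of such a local test (my own helper, assuming a Unix sort on PATH and Python 3.7+ for subprocess.run's capture_output):

# -*- coding: utf-8 -*-
# local_test.py: mimic the Hadoop Streaming pipeline on a single machine.
# Illustrative helper, not part of the original post.
import subprocess

with open("input.txt", "rb") as src:
    # run the mapper over the raw text
    mapped = subprocess.run(["python", "mapper.py"], stdin=src,
                            capture_output=True, check=True)

# the "shuffle" phase: sort mapper output so equal words become adjacent
shuffled = subprocess.run(["sort"], input=mapped.stdout,
                          capture_output=True, check=True)

# run the reducer over the sorted stream
reduced = subprocess.run(["python", "reducer.py"], input=shuffled.stdout,
                         capture_output=True, check=True)

print(reduced.stdout.decode("utf-8"))

If the local counts look right, the same two scripts can be handed to Hadoop unchanged.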

d) The run.sh launcher script

It submits the MapReduce job to Hadoop via Hadoop Streaming.

#!/bin/bash

HADOOP=hadoop       # the hadoop command; use its full path if hadoop is not on your PATH
STREAM=~/hadoop-2.9.2/share/hadoop/tools/lib/hadoop-streaming-2.9.2.jar  # the streaming jar of your installation

task_name="lijiacai"      # job name
mapper_num=2              # number of map tasks
reducer_num=2             # number of reduce tasks
priority=HIGH             # job priority
capacity_mapper=5000      # maximum concurrent map tasks
capacity_reducer=1000     # maximum concurrent reduce tasks

mapper_file=./mapper.py   # mapper script; any name works, this matches the mapper above
reducer_file=./reducer.py # reducer script

input_path=/user/hadoop_test/xxxx/input.txt  # input data on HDFS
output_path=/user/hadoop_test/xxx/output     # output directory for the reducer results

name="hadoop_test"        # hadoop user name
passwd="123456"           # hadoop user password

# Delete the previous output directory before each run, otherwise the job cannot write to that path.
$HADOOP fs -rm -r $output_path

$HADOOP jar $STREAM \
        -D mapred.job.name="$task_name" \
        -D mapred.job.priority=$priority \
        -D mapred.map.tasks=$mapper_num \
        -D mapred.reduce.tasks=$reducer_num \
        -D mapred.job.map.capacity=$capacity_mapper \
        -D mapred.job.reduce.capacity=$capacity_reducer \
        -D hadoop.job.ugi="${name},${passwd}" \
        -input ${input_path} \
        -output ${output_path} \
        -mapper $mapper_file \
        -reducer $reducer_file \
        -file $mapper_file \
        -file $reducer_file

Running the script gives output like the following (the warnings just note that the old mapred.* property names and the -file option are deprecated):

[hadoop_test@hserver1 hadoop_test] # sh run.sh  
Deleted /user/hadoop_test/lijiacai/output
19/03/20 17:21:05 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [./mapper.py, ./reducer.py, /tmp/hadoop-unjar3119144331866952236/] [] /tmp/streamjob6424092896909659797.jar tmpDir=null
19/03/20 17:21:06 INFO client.RMProxy: Connecting to ResourceManager at hserver1/10.58.107.38:8032
19/03/20 17:21:06 INFO client.RMProxy: Connecting to ResourceManager at hserver1/10.58.107.38:8032
19/03/20 17:21:07 INFO mapred.FileInputFormat: Total input files to process : 1
19/03/20 17:21:07 INFO mapreduce.JobSubmitter: number of splits:2
19/03/20 17:21:07 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
19/03/20 17:21:07 INFO Configuration.deprecation: mapred.job.priority is deprecated. Instead, use mapreduce.job.priority
19/03/20 17:21:07 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
19/03/20 17:21:07 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
19/03/20 17:21:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1548924494331_0031
19/03/20 17:21:07 INFO impl.YarnClientImpl: Submitted application application_1548924494331_0031
19/03/20 17:21:07 INFO mapreduce.Job: The url to track the job: http://hserver1:8088/proxy/application_1548924494331_0031/
19/03/20 17:21:07 INFO mapreduce.Job: Running job: job_1548924494331_0031
19/03/20 17:21:15 INFO mapreduce.Job: Job job_1548924494331_0031 running in uber mode : false
19/03/20 17:21:15 INFO mapreduce.Job:  map 0% reduce 0%
19/03/20 17:21:22 INFO mapreduce.Job:  map 100% reduce 0%
19/03/20 17:21:27 INFO mapreduce.Job:  map 100% reduce 100%
19/03/20 17:21:28 INFO mapreduce.Job: Job job_1548924494331_0031 completed successfully
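
Once the job finishes successfully, the word counts are written to the output directory as part-* files. Below is a small sketch (again my own, reusing the output_path placeholder from run.sh) for pulling them back to the client through the standard hadoop fs -cat command:

# -*- coding: utf-8 -*-
# fetch_output.py: read the reducer results back from HDFS.
# Illustrative only; the path must match output_path in run.sh.
import subprocess

output_path = "/user/hadoop_test/xxx/output"

# hadoop fs -cat expands the part-* glob on the HDFS side
result = subprocess.check_output(["hadoop", "fs", "-cat", output_path + "/part-*"])
print(result.decode("utf-8"))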
