Take the classic word-count program as an example.
When the mapper receives a line as its value:
package com.datang.mr;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the incoming line into individual words
        String[] words = value.toString().split(" ");
        for (int i = 0; i < words.length; i++) {
            // Emit one key-value pair <key: word, value: 1> per word
            context.write(new Text(words[i]), new IntWritable(1));
        }
    }
}
The reducer receives the data after the shuffle phase:
package com.datang.mr;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Input after the shuffle, grouped by key:
// a 1
// a 1
// b 1
// b 1
// b 1
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the values in the iterator; the total is the number of
        // occurrences of this word
        int sum = 0;
        for (IntWritable i : values) {
            sum = sum + i.get();
        }
        context.write(word, new IntWritable(sum));
    }
}
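To actually run these two classes as a Hadoop job, a driver is needed to wire them together. Below is a minimal sketch; the class name WordCountDriver, the command-line input/output paths, and the choice to reuse the reducer as a combiner are my own additions, not part of the original code:

```java
package com.datang.mr;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        // Assumption: since summing is associative, the reducer can also
        // serve as a combiner to reduce the data moved during the shuffle
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

This is job configuration rather than standalone logic; it only runs against a Hadoop installation with input/output paths supplied as arguments.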
- The input is split by line into key (the line's byte offset, often loosely called the line number) / value (line contents) pairs, which are handed out to the mappers
- Each mapper transforms its input pairs into pairs of key (word) / value (1), meaning the word occurred once in that line
- The shuffle phase groups the mappers' output, collecting the pairs that share the same key (word) into one group
- The reducer receives each shuffled group: the key (word) plus an iterator over the corresponding values; summing the values in the iterator gives the word's total count, which is written to the Context
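The flow in the steps above can be sketched in plain Java, with no Hadoop dependency. This simulation (class and method names are my own, for illustration only) runs the map, shuffle, and reduce phases in memory; a TreeMap stands in for the shuffle's grouping and key sorting:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ShuffleSimulation {
    // Simulate map -> shuffle -> reduce for word count on in-memory lines
    public static TreeMap<String, Integer> wordCount(String[] lines) {
        // Map phase: emit one (word, 1) pair per word
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        for (String line : lines)
            for (String w : line.split(" "))
                mapped.add(new AbstractMap.SimpleEntry<>(w, 1));
        // Shuffle phase: group values by key (TreeMap keeps keys sorted,
        // as the framework does)
        TreeMap<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> e : mapped)
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        // Reduce phase: sum the value list of each key
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // "a" appears twice and "b" four times across the two lines
        System.out.println(wordCount(new String[] {"a b b", "b a b"}));
    }
}
```

In the real framework the three phases run on different machines and the shuffle moves data over the network; here they are just three sequential loops.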
I am new to big data; the above is only my personal understanding, and corrections from readers are welcome.