import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    public static class WordCountMap extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            // Split the line into tokens and emit (word, 1) for each.
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class WordCountReduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
            // Sum all counts emitted for this word.
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Use the mapper and reducer classes defined above.
        conf.setMapperClass(WordCountMap.class);
        conf.setReducerClass(WordCountReduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
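To try the job end to end, the class can be compiled against the Hadoop client libraries and submitted with the hadoop launcher. This is a minimal sketch; the jar name and the HDFS input/output paths below are assumptions for illustration:

$ javac -classpath $(hadoop classpath) -d classes WordCount.java
$ jar cf wordcount.jar -C classes .
$ hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output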
Program analysis
1. The WordCountMap class extends MapReduceBase and implements the old-API org.apache.hadoop.mapred.Mapper interface. Its four generic type parameters are, in order: the map function's input key type, input value type, output key type, and output value type.
2. The WordCountReduce class extends MapReduceBase and implements org.apache.hadoop.mapred.Reducer; its four generic type parameters have the same meaning as for the mapper.
3. The map output types must match the reduce input types. In the common case the map output types also match the reduce output types, so reduce's input and output types end up identical, and the single pair of setOutputKeyClass/setOutputValueClass calls covers both. When they differ, the map output types must be declared separately, as sketched below.
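A minimal sketch of that less common case, assuming a hypothetical job whose mapper emits (Text, LongWritable) while the reducer emits (Text, Text); only the configuration calls matter here:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

public class TypeConfigSketch {
    static JobConf configure() {
        JobConf conf = new JobConf(TypeConfigSketch.class);
        // Final (reduce-side) output types:
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        // The intermediate (map-side) output types differ here, so they
        // must be declared explicitly; when these calls are omitted,
        // Hadoop assumes the map output types equal the job's output types.
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(LongWritable.class);
        return conf;
    }
}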
4. Hadoop determines the input format from the following line (the old-API JobConf call used in the code above; the new API would use job.setInputFormatClass instead):
conf.setInputFormat(TextInputFormat.class);
TextInputFormat is Hadoop's default input format and extends FileInputFormat. It splits the input data into InputSplits, each of which is processed by one mapper. The InputFormat also provides a RecordReader implementation that parses an InputSplit into <key, value> pairs and hands them to the map function:
key: the byte offset of the line within the input file, of type LongWritable.
value: the content of that line, of type Text.
Hence, in this example the map function's key/value input types are LongWritable and Text; a concrete illustration follows.
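As a worked example with a hypothetical two-line input file:

hello world
hello hadoop

the RecordReader would feed the map function these pairs. The second key is 12 because "hello world" plus its trailing newline occupies bytes 0 through 11 of the file:

(0, "hello world")
(12, "hello hadoop")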
5. Hadoop determines the output format from the following line (again the old-API JobConf call, matching the code above rather than the new-API job.setOutputFormatClass):
conf.setOutputFormat(TextOutputFormat.class);
TextOutputFormat is Hadoop's default output format. It writes each record as a single line of a text file, with the key and value separated by a tab by default, e.g.:
the 30
happy 23
……
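Once the job completes, the counts can be inspected with the HDFS shell. The output directory here is the hypothetical path from the run example above; with the old mapred API, each reducer writes a file named part-00000, part-00001, and so on:

$ hadoop fs -cat /user/hadoop/output/part-00000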