MapReduce Programming Examples (3)

Prerequisites:

1. Hadoop is installed and running normally. For installation and configuration, see: Installing and Configuring Hadoop 1.2.1 on Ubuntu

2. The IDE is set up and working. For IDE configuration, see: Setting Up a Hadoop Source-Reading Environment on Ubuntu


MapReduce programming examples in this series:

MapReduce Programming Examples (1): a detailed walkthrough of running the first MapReduce program, WordCount, in the IDE, with code analysis

MapReduce Programming Examples (2): computing students' average scores

MapReduce Programming Examples (3): data deduplication (this article)

MapReduce Programming Examples (4): sorting

MapReduce Programming Examples (5): a single-table join with MapReduce



Input (two of the input files are shown; the remaining files are elided, which is why the output below contains extra records):

2013-11-01 aa
2013-11-02 bb
2013-11-03 cc
2013-11-04 aa
2013-11-05 dd
2013-11-06 dd
2013-11-07 aa
2013-11-09 cc
2013-11-10 ee


2013-11-01 bb 
2013-11-02 33 
2013-11-03 cc
2013-11-04 bb
2013-11-05 23 
2013-11-06 dd
2013-11-07 99 
2013-11-09 99
2013-11-10 ee

.....

.....

.....

The data contains duplicate records. In the map phase, each input line is emitted as a key; the value can be anything (an empty Text here). The shuffle groups identical lines under a single key, so the reducer only has to rely on key uniqueness and write each key out once.
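
A minimal sketch of the data flow for one duplicate record, 2013-11-03 cc, which appears in both input files shown above:

map:     each occurrence becomes the pair ("2013-11-03 cc", ""), so it is emitted twice
shuffle: the pairs are grouped into a single call reduce("2013-11-03 cc", ["", ""])
reduce:  writes "2013-11-03 cc" exactly once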

The code is simple enough that it barely needs explanation. Here it is:

package com.t.hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

/**
 * Data deduplication
 * @author daT [email protected]
 *
 */
public class Dedup {

	public static class MyMapper extends Mapper<Object, Text, Text, Text>{

		// Emit the whole input line as the key; the value carries no
		// information, so an empty Text is used.
		@Override
		protected void map(Object key, Text value, Context context)
				throws IOException, InterruptedException {
			context.write(value, new Text(""));
		}
	}
	
	public static class MyReducer extends Reducer<Text, Text, Text, Text>{

		// Identical lines arrive grouped under one key, so writing the key
		// once per reduce() call removes the duplicates.
		@Override
		protected void reduce(Text key, Iterable<Text> values,
				Context context)
				throws IOException, InterruptedException {
			context.write(key, new Text(""));
		}
	}
	
	
	public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException{
		Configuration conf = new Configuration();
		String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
		
		if(otherArgs.length < 2){
			System.err.println("Usage: Dedup <in> <out>");
			System.exit(2);
		}
		
		Job job = new Job(conf, "Dedup");
		job.setJarByClass(Dedup.class);
		job.setMapperClass(MyMapper.class);
		// The reducer doubles as a combiner: it only passes keys through,
		// so running it on map output is safe and cuts shuffle traffic.
		job.setCombinerClass(MyReducer.class);
		job.setReducerClass(MyReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(Text.class);
		
		FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
		FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
		
		System.exit(job.waitForCompletion(true)?0:1);
		
	}
	
}
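
To run the job on a cluster, package the class into a jar and launch it with the standard hadoop jar command. A minimal sketch, assuming the jar is named dedup.jar and that dedup_in and dedup_out are placeholder HDFS paths:

hadoop jar dedup.jar com.t.hadoop.Dedup dedup_in dedup_out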


Output (the records come out sorted because the framework sorts keys during the shuffle):
2013-11-01 aa
2013-11-01 bb
2013-11-02 33
2013-11-02 bb
2013-11-03 cc
2013-11-03 cc
2013-11-04 98
2013-11-04 aa
2013-11-04 bb
2013-11-05 23
2013-11-05 93
2013-11-05 dd
2013-11-06 99
2013-11-06 dd
2013-11-07 92
2013-11-07 99
2013-11-07 aa
2013-11-09 99
2013-11-09 aa
2013-11-09 cc
2013-11-10 ee
