Secondary Sort with MapReduce

Today I implemented a secondary sort over some data with MapReduce. The heart of the exercise is the sorting, so we override the compare method of WritableComparator. Note what the return value means: a negative value places the first argument earlier in the output, a positive value places it later, so returning -1 when the first score is larger yields a descending sort. I'm writing this post to record my own learning, and hopefully it helps anyone who needs it.
Problem statement:
1. Comprehensive design exercise
Suppose two .csv files hold the grades of two classes. Each file has two columns: student number and math score. The data is listed below.
Note: the class is encoded in the student number. For example, in 2017876211 the first eight digits identify the class and the last two digits identify the student within it.
Program requirements:
The output's first column must be the student number, the second the math score.
The output must be sorted by math score in descending order; ties are broken by student number in ascending order (a secondary sort).
Each class must be written to its own output file.

The data:
First dataset:
2017876101,39
2017876102,100
2017876103,49
2017876104,79
2017876105,84
2017876106,93
2017876107,24
2017876108,99
2017876109,52
2017876110,43
2017876111,13
2017876112,74
2017876113,88
2017876114,48
2017876115,58
2017876116,93
2017876117,43
2017876118,51
2017876119,91
2017876120,93
2017876121,21
2017876122,47
2017876123,16
2017876124,19
2017876125,93
2017876126,93
2017876127,20
2017876128,16
2017876129,21
2017876130,23
2017876131,87
2017876132,79
2017876133,49
2017876134,72
2017876135,93
2017876136,79
2017876137,87
2017876138,21
2017876139,15
2017876140,63
2017876141,28
2017876142,19
2017876143,86
2017876144,48
2017876145,65
2017876146,98
2017876147,88
2017876148,72
2017876149,14
2017876150,26
2017876151,72
2017876152,87
2017876153,99
2017876154,99
2017876155,100
2017876156,100
2017876157,35
2017876158,35
2017876159,72
2017876160,72
2017876161,63
2017876162,46
2017876163,55
2017876164,41
2017876165,84
2017876166,27
2017876167,51
2017876168,44
2017876169,82
2017876170,32
2017876171,36
2017876172,72
2017876173,92
2017876174,95
2017876175,87
2017876176,95
2017876177,27
2017876178,91
2017876179,56
2017876180,59
Second dataset:
2017876201,96
2017876202,84
2017876203,96
2017876204,26
2017876205,67
2017876206,96
2017876207,96
2017876208,55
2017876209,19
2017876210,54
2017876211,47
2017876212,96
2017876213,69
2017876214,90
2017876215,38
2017876216,31
2017876217,72
2017876218,46
2017876219,37
2017876220,14
2017876221,96
2017876222,62
2017876223,96
2017876224,77
2017876225,40
2017876226,48
2017876227,20
2017876228,32
2017876229,89
2017876230,98
2017876231,88
2017876232,64
2017876233,88
2017876234,80
2017876235,96
2017876236,88
2017876237,64
2017876238,100
2017876239,15
2017876240,93
2017876241,44
2017876242,48
2017876243,72
2017876244,83
2017876245,62
2017876246,60
2017876247,81
2017876248,44
2017876249,100
2017876250,88
2017876251,22
2017876252,22
2017876253,46
2017876254,25
2017876255,36
2017876256,100
2017876257,29
2017876258,79
2017876259,91
2017876260,73
2017876261,88
2017876262,60
2017876263,96
2017876264,32
2017876265,86
2017876266,88
2017876267,94
2017876268,96
2017876269,93
2017876270,34
2017876271,97
2017876272,33
2017876273,82
2017876274,33
2017876275,15
2017876276,58
2017876277,27
2017876278,60
2017876279,64
2017876280,88

  • 1. The Writable class:
    Analysis: put both the student number and the score into the writable and use the pair together as the key; both fields are plain ints, serialized with writeInt/readInt.
package SequenceScoreAndSno;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;
public class MyWritable implements WritableComparable<MyWritable>{
 private int sno;
 private int score;
 public int getSno() {
  return sno;
 }
 public void setSno(int sno) {
  this.sno = sno;
 }
 public int getScore() {
  return score;
 }
 @Override
 public String toString() {
  return "sno=" + sno + ", score=" + score ;
 }
 public void setScore(int score) {
  this.score = score;
 }
 public MyWritable(){
  
 }
 public MyWritable(int sno,int score){
  set(sno,score);
 }
 private void set(int sno, int score) {
  this.sno = sno;
  this.score = score;
  
 }
 @Override
 public void readFields(DataInput in) throws IOException {
  sno = in.readInt();
  score = in.readInt();
  
 }
 @Override
 public void write(DataOutput out) throws IOException {
  out.writeInt(sno);
  out.writeInt(score);
  
 }
 @Override
 public int compareTo(MyWritable o) {
  // Keep compareTo consistent with the sort comparator:
  // score descending, then student number ascending.
  if (score != o.score)
   return Integer.compare(o.score, score);
  return Integer.compare(sno, o.sno);
 }
}
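As a quick sanity check outside Hadoop, the write/readFields pair above is just symmetric int serialization, which can be reproduced with plain java.io streams. A minimal sketch (WritableRoundTrip is an illustrative name, not part of the assignment code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hadoop-free sketch: the key is serialized as two 4-byte ints (sno, then
// score), and deserialization must read them back in exactly the same order.
public class WritableRoundTrip {
    static int[] roundTrip(int sno, int score) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(sno);   // mirrors MyWritable.write()
            out.writeInt(score);
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray()));
            // mirrors MyWritable.readFields(): read in write order
            return new int[] { in.readInt(), in.readInt() };
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        int[] back = roundTrip(2017876101, 39);
        System.out.println("sno=" + back[0] + ", score=" + back[1]);
    }
}
```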
  • 2. The partitioner:
    The assignment requires two output files, one per class, so we partition by the class prefix of the student number: IDs starting with 20178761 are class 1, IDs starting with 20178762 are class 2. Partitioner code:
package SequenceScoreAndSno;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Partitioner;
public class MyPartitioner extends Partitioner<MyWritable,NullWritable>{
 @Override
 @Override
 public int getPartition(MyWritable mw, NullWritable value, int numPartitions)
 {
  String sno = Integer.toString(mw.getSno());
  // startsWith, not contains: we are matching the class prefix of the ID
  if(sno.startsWith("20178761"))
      return 0;
  else
   return 1;
 }
}
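The partition rule can be checked without a cluster. A Hadoop-free sketch of the same prefix test (PartitionSketch is an illustrative name):

```java
// Sketch of the partition rule used by MyPartitioner: IDs whose first
// eight digits are 20178761 go to partition 0 (class 1), everything
// else to partition 1 (class 2).
public class PartitionSketch {
    static int getPartition(int sno) {
        return Integer.toString(sno).startsWith("20178761") ? 0 : 1;
    }

    public static void main(String[] args) {
        System.out.println(getPartition(2017876105)); // class 1 -> partition 0
        System.out.println(getPartition(2017876211)); // class 2 -> partition 1
    }
}
```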

  • 3. The sort comparator:
    Scores must come out in descending order, while MapReduce sorts keys in ascending order by default, so we override the comparator:

package SequenceScoreAndSno;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
public class MyComparator extends WritableComparator {
	protected MyComparator()
	{
		super(MyWritable.class,true);
	}
	@Override
	public int compare(WritableComparable w1,WritableComparable w2)
	{
		MyWritable mw1 = (MyWritable) w1;
		MyWritable mw2 = (MyWritable) w2;
		if(mw1.getScore()>mw2.getScore())//higher score sorts first
			return -1;//descending by score
		else if(mw1.getScore()<mw2.getScore())//lower score sorts later
			return 1;
		else//scores are equal:
		{
			//break the tie by student number, ascending
			if(mw1.getSno()>mw2.getSno())
				return 1;
			else
				return -1;
		}
	}
}
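The same two-level ordering can be exercised outside Hadoop with a plain Comparator over {sno, score} pairs, which makes the descending-then-ascending behaviour easy to verify. A sketch (SortSketch is an illustrative name):

```java
import java.util.Arrays;
import java.util.Comparator;

// Hadoop-free sketch of the sort comparator: each record is {sno, score},
// ordered by score descending and, on ties, by student number ascending.
public class SortSketch {
    static final Comparator<int[]> SCORE_DESC_THEN_SNO_ASC = (a, b) -> {
        if (a[1] != b[1])
            return Integer.compare(b[1], a[1]); // score: descending
        return Integer.compare(a[0], b[0]);     // sno: ascending tie-break
    };

    static int[][] sortRows(int[][] rows) {
        Arrays.sort(rows, SCORE_DESC_THEN_SNO_ASC);
        return rows;
    }

    public static void main(String[] args) {
        int[][] rows = {
            { 2017876155, 100 }, { 2017876101, 39 }, { 2017876102, 100 } };
        // two students share 100, so the smaller sno (2017876102) comes first
        for (int[] r : sortRows(rows))
            System.out.println(r[0] + "\t" + r[1]);
    }
}
```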

  • 4. The grouping comparator:
    By default MapReduce groups values by equal keys. Our key is student number + score, and student numbers are unique, so by default every record would get its own reduce() call. Instead we group by score, so all records with the same score share one reduce() call:
package SequenceScoreAndSno;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
public class MyGroupSort extends WritableComparator {
	public MyGroupSort()
	{
		super(MyWritable.class,true);
	}
	@SuppressWarnings("rawtypes")
	@Override
	public int compare(WritableComparable a,WritableComparable b)
	{
		MyWritable mw1 = (MyWritable) a;
		MyWritable mw2 = (MyWritable) b;
		if(mw1.getScore() == mw2.getScore())//same score -> same group
			return 0;
		else
			return 1;
	}
}
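The grouping contract boils down to: return 0 for keys that should share one reduce() call, anything else otherwise; the student number is deliberately ignored. A Hadoop-free sketch of that rule (GroupSketch is an illustrative name):

```java
// Sketch of MyGroupSort's contract: two keys belong to the same reduce
// group exactly when their scores are equal.
public class GroupSketch {
    static boolean sameGroup(int scoreA, int scoreB) {
        // mirrors MyGroupSort.compare(): 0 means "same group"
        return (scoreA == scoreB ? 0 : 1) == 0;
    }

    public static void main(String[] args) {
        // e.g. 2017876106 and 2017876116 both scored 93 -> one reduce() call
        System.out.println(sameGroup(93, 93)); // true
        System.out.println(sameGroup(93, 72)); // false
    }
}
```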

  • 5. The Mapper:
    Read in the student number and the score:
package SequenceScoreAndSno;
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMapper extends Mapper<Object,Text,MyWritable, NullWritable> {
	private MyWritable out = new MyWritable();
	@Override
	public void map(Object key, Text value, Context context) throws IOException,InterruptedException
	{
		String[] strs = value.toString().split(",");
		int sno = Integer.parseInt(strs[0]);
		int score = Integer.parseInt(strs[1]);
		out.setSno(sno);
		out.setScore(score);
		System.out.println(sno+"\t"+score);
		context.write(out, NullWritable.get());
	}
}
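The map-side work is just CSV parsing: split each "sno,score" line on the comma and parse both fields as ints. A standalone sketch of that step (ParseSketch is an illustrative name):

```java
// Hadoop-free sketch of the parsing done in MyMapper.map(): each input
// line is "sno,score", and both fields are parsed as ints.
public class ParseSketch {
    static int[] parseLine(String line) {
        String[] parts = line.trim().split(",");
        return new int[] { Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]) };
    }

    public static void main(String[] args) {
        int[] rec = parseLine("2017876101,39");
        System.out.println(rec[0] + "\t" + rec[1]);
    }
}
```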

  • 6. The Reducer:
package SequenceScoreAndSno;
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;
public class MyReducer extends Reducer<MyWritable, NullWritable,MyWritable, NullWritable>
{
	@Override
	public void reduce(MyWritable key,Iterable<NullWritable> values,Context context) throws IOException,InterruptedException
	{
		// Hadoop reuses the key object and updates it for each value in the
		// group, so writing the key on every iteration emits every student
		// with this score, not just the first one.
		for(NullWritable val : values)
		{
			System.out.println(key.getSno()+"\t"+key.getScore());
			context.write(key, NullWritable.get());
		}
	}
	}
}

  • 7. The driver (Job):
package SequenceScoreAndSno;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class MyJob {
	public static void main(String[] args) throws Exception
	{
		// HDFS paths are hard-coded for this exercise; they deliberately
		// override any command-line arguments.
		String inputPath = "hdfs://slave:9000/inputdatas";
		String outputPath = "hdfs://slave:9000/sortoutput";
		args = new String[] {inputPath,outputPath};

		Configuration conf = new Configuration();
		Job job = Job.getInstance(conf);
		job.setJarByClass(MyJob.class);
		
		job.setMapperClass(MyMapper.class);
		job.setReducerClass(MyReducer.class);
		
		job.setPartitionerClass(MyPartitioner.class);
		job.setSortComparatorClass(MyComparator.class);
		job.setGroupingComparatorClass(MyGroupSort.class);
	   
		job.setNumReduceTasks(2);
		
		job.setOutputKeyClass(MyWritable.class);
		job.setOutputValueClass(NullWritable.class);
		
		FileInputFormat.addInputPath(job, new Path(args[0]));
		FileOutputFormat.setOutputPath(job, new Path(args[1]));
		
		job.waitForCompletion(true);
	}
}

Output:
First output file: (screenshot in the original post)
Second output file: (screenshot in the original post)
The results are correct!
