1. OutputFormat Implementation Classes

OutputFormat is the base class for MapReduce output; every class that implements MapReduce output implements the OutputFormat interface.
- Text output: TextOutputFormat
  The default output format is TextOutputFormat, which writes each record as a line of text. Its keys and values can be of any type, because TextOutputFormat calls toString() to convert them to strings.
- SequenceFileOutputFormat
  When the output is meant to be consumed as the input of a subsequent MapReduce job, SequenceFileOutputFormat is a good choice of output format, because its format is compact and compresses easily (see the driver sketch after this list).
- Custom OutputFormat
  Implement the output yourself, tailored to your requirements.
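To make the choice concrete, here is a minimal sketch of how a driver selects an output format (the class name OutputFormatChoice is illustrative, not from the original text):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

import java.io.IOException;

public class OutputFormatChoice {
    public static void main(String[] args) throws IOException {
        Job job = Job.getInstance(new Configuration());

        // TextOutputFormat is the default, so no call is needed to use it.
        // To produce compact, easily compressed output for a follow-up
        // MapReduce job, switch to SequenceFileOutputFormat instead:
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
    }
}
```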
2. Custom OutputFormat

- Use cases
  A custom OutputFormat gives you control over the final files' output paths and output format. For example, if a single MapReduce program must write two kinds of results to different directories depending on the data, that kind of flexible output requirement can be met by customizing OutputFormat.
- Steps to customize OutputFormat:
  (1) Define a class that extends FileOutputFormat.
  (2) Override RecordWriter, in particular write(), the method that writes out the data.
  The case study in section 3 implements both steps.
3. Custom OutputFormat Case Study

- Requirement
  Filter the input log: websites containing easysir are written to easysir.log, and websites not containing easysir are written to other.log.
- Input data

```
http://www.baidu.com
http://www.google.com
http://cn.bing.com
http://www.easysir.com
http://www.sohu.com
http://www.sina.com
http://www.sin2a.com
http://www.sin2desa.com
http://www.sindsafa.com
```
- Create the package: com.easysir.outputformat
- Create the FilterMapper class:

```java
package com.easysir.outputformat;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FilterMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit the whole line as the key; NullWritable carries no payload
        context.write(value, NullWritable.get());
    }
}
```
- Create the FilterReducer class:

```java
package com.easysir.outputformat;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FilterReducer extends Reducer<Text, NullWritable, Text, NullWritable> {

    Text k = new Text();

    @Override
    protected void reduce(Text key, Iterable<NullWritable> values, Context context)
            throws IOException, InterruptedException {
        String line = key.toString();

        // Append a line break so the output is easier to read
        line = line + "\r\n";
        k.set(line);

        // Iterate over the values so duplicate records are preserved
        for (NullWritable nullWritable : values) {
            context.write(k, NullWritable.get());
        }
    }
}
```
- Create the OutputFormat class:

```java
package com.easysir.outputformat;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class OutputFormat extends FileOutputFormat<Text, NullWritable> {

    @Override
    public RecordWriter<Text, NullWritable> getRecordWriter(TaskAttemptContext job)
            throws IOException, InterruptedException {
        return new FRecordWriter(job);
    }
}
```
- Create the FRecordWriter class:

```java
package com.easysir.outputformat;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

import java.io.IOException;

public class FRecordWriter extends RecordWriter<Text, NullWritable> {

    FSDataOutputStream fos_easysir;
    FSDataOutputStream fos_other;

    public FRecordWriter(TaskAttemptContext job) {
        try {
            // 1 Get the file system
            FileSystem fs = FileSystem.get(job.getConfiguration());

            // 2 Create the output stream for easysir.log
            fos_easysir = fs.create(new Path("E:\\idea-workspace\\mrWordCount\\output\\easysir.log"));

            // 3 Create the output stream for other.log
            fos_other = fs.create(new Path("E:\\idea-workspace\\mrWordCount\\output\\other.log"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void write(Text key, NullWritable value) throws IOException, InterruptedException {
        // Check whether the key contains easysir
        if (key.toString().contains("easysir")) {
            fos_easysir.write(key.toString().getBytes());
        } else {
            fos_other.write(key.toString().getBytes());
        }
    }

    @Override
    public void close(TaskAttemptContext context) throws IOException, InterruptedException {
        IOUtils.closeStream(fos_easysir);
        IOUtils.closeStream(fos_other);
    }
}
```
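One caveat with the constructor above: the two output paths are hardcoded to a local Windows directory, which binds the job to a single machine. A hedged alternative sketch, not part of the original case: derive the directory from the job context so the two log files land under the directory the driver passes to FileOutputFormat.setOutputPath(). It requires one extra import, org.apache.hadoop.mapreduce.lib.output.FileOutputFormat:

```java
// Alternative FRecordWriter constructor: derive the output directory from
// the job configuration instead of hardcoding machine-specific paths.
public FRecordWriter(TaskAttemptContext job) throws IOException {
    FileSystem fs = FileSystem.get(job.getConfiguration());

    // Directory set by FileOutputFormat.setOutputPath() in the driver
    Path outDir = FileOutputFormat.getOutputPath(job);

    fos_easysir = fs.create(new Path(outDir, "easysir.log"));
    fos_other = fs.create(new Path(outDir, "other.log"));
}
```

Because getRecordWriter() already declares throws IOException, the calling code in the OutputFormat class does not need to change.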
- Create the FilterDriver class:

```java
package com.easysir.outputformat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FilterDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // Set the input and output paths to the actual paths on your machine
        args = new String[] {
                "E:\\idea-workspace\\mrWordCount\\input\\output_data.txt",
                "E:\\idea-workspace\\mrWordCount\\output1"
        };

        // 1 Get the configuration and the job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2 Set the class the local jar is built from
        job.setJarByClass(FilterDriver.class);

        // 3 Set the mapper and reducer classes
        job.setMapperClass(FilterMapper.class);
        job.setReducerClass(FilterReducer.class);

        // 4 Set the key/value types of the map output
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);

        // 5 Set the key/value types of the final output
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        // Register the custom output format component with the job
        job.setOutputFormatClass(OutputFormat.class);

        // 6 Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));

        // Even though we use a custom OutputFormat, it extends FileOutputFormat,
        // and FileOutputFormat writes a _SUCCESS file, so an output directory
        // still has to be specified here
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7 Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
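With the sample input above, easysir.log should end up containing only http://www.easysir.com, while the remaining URLs go to other.log. Since every record is written by FRecordWriter rather than by the default writer, the output1 directory itself should hold only the _SUCCESS marker file.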