Introduction
In my last post I shared how to implement a custom input format. Now let's look at another problem.
Here is the raw data:
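For illustration, suppose the input file contains one news entry per line, something like this (purely hypothetical lines):

bigdata releases a new streaming framework
local football team wins the championship
bigdata conference announced for next month
weekend weather forecast: sunny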
We want to split it into two files: one holding the news lines that mention bigdata, the other holding everything else.
A custom output format lets us do exactly that.
The code
First, the directory structure:
mapperClass
package costomOutputFormat;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
/**
* @Author: Braylon
* @Date: 2020/1/29 11:48
* @Version: 1.0
*/
public class mapperClass extends Mapper<LongWritable, Text, Text, NullWritable> {
    Text k = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String s = value.toString();
        k.set(s);
        context.write(k, NullWritable.get());
    }
}
The logic here is very simple: each line's content is used as the key and sent on to the next stage.
reducerClass
package costomOutputFormat;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
/**
* @Author: Braylon
* @Date: 2020/1/29 11:52
* @Version: 1.0
*/
public class reducerClass extends Reducer<Text, NullWritable, Text, NullWritable> {
    @Override
    protected void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        String s = key.toString();
        s = s + "\r\n";
        context.write(new Text(s), NullWritable.get());
    }
}
The only addition here is the line break appended to each key; our custom RecordWriter will write raw bytes without inserting any separator, so the reducer has to supply it. Everything else is straightforward so far.
costomOutputFormat:
If you read my previous post you will remember that we implemented custom input by extending the FileInputFormat class. The same idea applies here:
package costomOutputFormat;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
/**
* @Author: Braylon
* @Date: 2020/1/29 11:54
* @Version: 1.0
*/
public class costomOutputFormat extends FileOutputFormat<Text, NullWritable> {
    @Override
    public RecordWriter<Text, NullWritable> getRecordWriter(TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        return new costomRecordWriter(taskAttemptContext);
    }
}
And here we finally arrive at the key piece: costomRecordWriter, our own class extending RecordWriter. Just like the RecordReader in the custom input case, this class is where the main logic lives.
costomRecordWriter:
package costomOutputFormat;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import java.io.IOException;
/**
* @Author: Braylon
* @Date: 2020/1/29 11:56
* @Version: 1.0
*/
public class costomRecordWriter extends RecordWriter<Text, NullWritable> {
    FSDataOutputStream bigdata = null;
    FSDataOutputStream other = null;

    public costomRecordWriter(TaskAttemptContext context) {
        FileSystem fs;
        try {
            fs = FileSystem.get(context.getConfiguration());
            Path path1 = new Path("D:\\idea\\HDFS\\src\\main\\java\\costomOutputFormat\\data\\out1");
            Path path2 = new Path("D:\\idea\\HDFS\\src\\main\\java\\costomOutputFormat\\data\\out2");
            bigdata = fs.create(path1);
            other = fs.create(path2);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void write(Text text, NullWritable nullWritable) throws IOException, InterruptedException {
        // Check whether the line contains the target string
        if (text.toString().contains("bigdata")) {
            bigdata.write(text.toString().getBytes());
        } else {
            other.write(text.toString().getBytes());
        }
    }

    @Override
    public void close(TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        if (bigdata != null) {
            bigdata.close();
        }
        if (other != null) {
            other.close();
        }
    }
}
Key points
- The constructor is passed the context, from which we read the job configuration and obtain the FileSystem. (A more portable variant is sketched after this list.)
- The write method is overridden to check whether each line contains the target string and to write it into the corresponding file.
- Note the close method. I have not shown this form before (we usually put the close right after fis.write), so it may look unfamiliar, but all it does is close the two output streams. The logic is easy to follow.
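As promised, here is a minimal sketch of the constructor without the hard-coded absolute paths, which tie the job to one machine. It assumes we derive the two files from the directory passed to FileOutputFormat.setOutputPath in the driver; the file names bigdata.txt and other.txt are my own choice, and org.apache.hadoop.mapreduce.lib.output.FileOutputFormat must be added to the imports:

// Sketch only: derive the output files from the job's output directory.
public costomRecordWriter(TaskAttemptContext context) throws IOException {
    FileSystem fs = FileSystem.get(context.getConfiguration());
    // The directory set via FileOutputFormat.setOutputPath in the driver
    Path outDir = FileOutputFormat.getOutputPath(context);
    bigdata = fs.create(new Path(outDir, "bigdata.txt")); // hypothetical name
    other = fs.create(new Path(outDir, "other.txt"));     // hypothetical name
}

Declaring throws IOException also removes the need for the try/catch, since getRecordWriter already declares that exception.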
The driver class:
package costomOutputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
/**
* @Author: Braylon
* @Date: 2020/1/29 12:06
* @Version: 1.0
*/
public class driver {
    public static void main(String[] args) throws IOException {
        args = new String[]{"D:\\idea\\HDFS\\src\\main\\java\\costomOutputFormat\\data\\1.txt", "D:\\idea\\HDFS\\src\\main\\java\\costomOutputFormat\\out"};
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(driver.class);
        job.setMapperClass(mapperClass.class);
        job.setReducerClass(reducerClass.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        job.setOutputFormatClass(costomOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        /* Although we defined our own output format, costomOutputFormat extends
         * FileOutputFormat and therefore still writes a _SUCCESS file, which is
         * why we must still specify an output directory. */
        try {
            job.waitForCompletion(true);
            System.out.println("done");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
Nothing new here, apart from the _SUCCESS marker mentioned in the comment; if it bothers you, see the note below.
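A hedged aside: to my knowledge, Hadoop's FileOutputCommitter can be told not to write the _SUCCESS marker via the following property, set on the Configuration before creating the Job:

// Assumption: the standard FileOutputCommitter flag in Hadoop 2.x
conf.setBoolean("mapreduce.fileoutputcommitter.marksuccessfuljobs", false);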
Take care of yourselves, everyone, and don't add to the country's troubles.
Stay strong, Wuhan!
Let's encourage one another~~