Deduplicating the data in data files
The idea of the MapReduce solution below: the mapper emits every input line as a key with a NullWritable value, the shuffle groups identical keys together, and the reducer writes each distinct line exactly once.
1. The input data is shown below:
The required output is shown below:
2. Creating the required files
- First, create a DateRemove folder on HDFS:
./bin/hdfs dfs -mkdir /user/hadoop/DateRemove
Check with ls that the folder was created successfully:
./bin/hdfs dfs -ls /user/hadoop
- Create a directory named DateRemove/input on HDFS:
./bin/hdfs dfs -mkdir /user/hadoop/DateRemove/input
- Create file1.txt and file2.txt locally and write the input data into them.
- Upload file1.txt and file2.txt to the /user/hadoop/DateRemove/input folder on HDFS:
./bin/hdfs dfs -put ./file1.txt /user/hadoop/DateRemove/input
./bin/hdfs dfs -put ./file2.txt /user/hadoop/DateRemove/input
Then check again with ls:
./bin/hdfs dfs -ls /user/hadoop/DateRemove/input
The listing confirms that file1.txt and file2.txt are indeed present.
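The same preparation can also be done programmatically through Hadoop's FileSystem API rather than the dfs shell. Below is a minimal sketch, assuming the NameNode address hdfs://localhost:9000 used later in DedupRunner and that file1.txt and file2.txt sit in the current local directory; the PrepareInput class name is only a hypothetical helper, not part of the assignment.

package Data_De_duplication;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper that mirrors the dfs -mkdir / -put / -ls commands above.
public class PrepareInput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);

        // Creates DateRemove and DateRemove/input in one call.
        Path input = new Path("/user/hadoop/DateRemove/input");
        fs.mkdirs(input);

        // Upload the two local files into the input directory.
        fs.copyFromLocalFile(new Path("./file1.txt"), input);
        fs.copyFromLocalFile(new Path("./file2.txt"), input);

        // List the directory to confirm the upload, like dfs -ls.
        for (FileStatus status : fs.listStatus(input)) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}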
3. Java code
①DedupMapper.java
package Data_De_duplication;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class DedupMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private static Text field = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit the whole input line as the key; the value carries no information.
        // Identical lines become identical keys and are merged in the shuffle phase.
        field = value;
        context.write(field, NullWritable.get());
    }
}
②DedupReducer.java
package Data_De_duplication;
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class DedupReducer extends Reducer<Text, NullWritable, Text, NullWritable> {

    @Override
    protected void reduce(Text key, Iterable<NullWritable> values, Context context)
            throws IOException, InterruptedException {
        // Each distinct line reaches the reducer exactly once as a key,
        // so writing only the key produces the deduplicated output.
        context.write(key, NullWritable.get());
    }
}
③DedupRunner.java
package Data_De_duplication;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class DedupRunner {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(DedupRunner.class);
        job.setMapperClass(DedupMapper.class);
        job.setReducerClass(DedupReducer.class);

        // The job's final output is a Text key with a NullWritable value.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        // Where to read the input and where to save the results once processing completes.
        FileInputFormat.setInputPaths(job, new Path("hdfs://localhost:9000/user/hadoop/DateRemove/input"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/user/hadoop/DateRemove/output"));

        job.waitForCompletion(true);
    }
}
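One practical note: waitForCompletion fails if the output directory already exists, so re-running the job requires removing the old output first. A minimal sketch of what could be added inside main before the output path is set (it needs one extra import, org.apache.hadoop.fs.FileSystem; the variable names are only illustrative):

// Delete any output left over from a previous run so the job can start cleanly.
Path output = new Path("hdfs://localhost:9000/user/hadoop/DateRemove/output");
FileSystem fs = FileSystem.get(conf);
if (fs.exists(output)) {
    fs.delete(output, true);   // true = delete the directory recursively
}
FileOutputFormat.setOutputPath(job, output);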
Click the newly created DedupRunner.java, choose Run As -> Run Configurations, and set the run-time parameters as follows.
Run result:
The output folder was indeed generated automatically.
Use the cat command to view the part-r-00000 file produced when the job completes; the result is as follows:
./bin/hdfs dfs -cat /user/hadoop/DateRemove/output/part-r-00000
Comparing this with the input shows that the duplicate records have been removed successfully.
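For completeness, the same check can be done from Java instead of dfs -cat. Below is a minimal sketch, assuming the output path used above; the PrintOutput class name is only an illustrative helper.

package Data_De_duplication;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper equivalent to running dfs -cat on the result file.
public class PrintOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);
        Path result = new Path("/user/hadoop/DateRemove/output/part-r-00000");
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(result)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);   // each distinct input line appears exactly once
            }
        }
        fs.close();
    }
}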