0x00 Contents
- Merging small files with SequenceFile
- Verifying the result
Background: in a Hadoop cluster, metadata is managed by the NameNode, and each small file becomes its own split with its own metadata entry. A large number of small files therefore puts heavy pressure on the NameNode's memory, so merging small files is a worthwhile optimization. There are two main ways to merge: converting the files into a SequenceFile, or using CombineFileInputFormat.
SequenceFile is chosen here because it is a binary key-value format: when merging, we can use each small file's name as the key and that file's contents as the value.
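To make the key-value idea concrete before the MapReduce job below, here is a minimal standalone sketch that writes one file-name/content pair into a SequenceFile with the plain SequenceFile.Writer API; the class name and the path /tmp/demo.seq are just examples, not part of the original code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Key = file name, value = raw file bytes, mirroring the merge job below
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(new Path("/tmp/demo.seq")),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            byte[] contents = "shao nai yi".getBytes("UTF-8");
            writer.append(new Text("small1.txt"), new BytesWritable(contents));
        }
    }
}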
0x01 Merging small files with SequenceFile
1. Preparation
a. There are four files on my HDFS:
[hadoop-sny@master ~]$ hadoop fs -ls /files/
Found 4 items
-rw-r--r-- 1 hadoop-sny supergroup 39 2019-04-18 21:20 /files/put.txt
-rw-r--r-- 1 hadoop-sny supergroup 50 2019-12-30 17:12 /files/small1.txt
-rw-r--r-- 1 hadoop-sny supergroup 31 2019-12-30 17:10 /files/small2.txt
-rw-r--r-- 1 hadoop-sny supergroup 49 2019-12-30 17:11 /files/small3.txt
Their contents are listed below in order (any content will do):
shao nai yi
nai nai yi yi
shao nai nai
hello hi hi hadoop
spark kafka shao
nai yi nai yi
hello 1
hi 1
shao 3
nai 1
yi 3
guangdong 300
hebei 200
beijing 198
tianjing 209
b. Besides creating the files on Linux and then uploading them, you can also write a file directly from standard input as a stream, e.g. for small1.txt:
hadoop fs -put - /files/small1.txt
Type the content, then press Ctrl+D to finish.
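Equivalently, you can pipe the content in without typing it interactively; a small convenience sketch using the sample content of small1.txt from above (if the file already exists, remove it first with hadoop fs -rm):
printf 'hello hi hi hadoop\nspark kafka shao\nnai yi nai yi\n' | hadoop fs -put - /files/small1.txt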
2. Complete code
a. SmallFilesToSequenceFileConverter (complete code)
package com.shaonaiyi.hadoop.filetype.smallfiles;

import com.shaonaiyi.hadoop.utils.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

import java.io.IOException;

/**
 * @Author [email protected]
 * @Date 2019/12/30 16:29
 * @Description Merge small files via SequenceFile
 */
public class SmallFilesToSequenceFileConverter {

    static class SequenceFileMapper extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {
        private Text fileNameKey;

        @Override
        protected void setup(Context context) {
            // Use the full path of the file backing this split as the output key
            InputSplit split = context.getInputSplit();
            Path path = ((FileSplit) split).getPath();
            fileNameKey = new Text(path.toString());
        }

        @Override
        protected void map(NullWritable key, BytesWritable value, Context context) throws IOException, InterruptedException {
            // The whole file arrives as a single record; emit (file name, file bytes)
            context.write(fileNameKey, value);
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration(), "SmallFilesToSequenceFileConverter");
        job.setJarByClass(SmallFilesToSequenceFileConverter.class);
        // Read each small file whole, as one (NullWritable, BytesWritable) record
        job.setInputFormatClass(WholeFileInputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);
        // Write the merged records out as a single SequenceFile
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setMapperClass(SequenceFileMapper.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        String outputPath = args[1];
        FileUtils.deleteFileIfExists(outputPath);
        FileOutputFormat.setOutputPath(job, new Path(outputPath));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
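Note that FileUtils here is the author's own utility class (com.shaonaiyi.hadoop.utils.FileUtils), not Hadoop's, and its source is not shown in this post. A minimal hypothetical sketch of what deleteFileIfExists might look like, assuming it simply removes the output path from HDFS if it already exists:
package com.shaonaiyi.hadoop.utils;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

public class FileUtils {
    // Hypothetical implementation: delete the path (recursively) if present,
    // so the job does not fail with "output directory already exists"
    public static void deleteFileIfExists(String path) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path(path);
        if (fs.exists(p)) {
            fs.delete(p, true);
        }
    }
}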
b. WholeFileInputFormat (complete code)
package com.shaonaiyi.hadoop.filetype.smallfiles;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import java.io.IOException;

/**
 * @Author [email protected]
 * @Date 2019/12/30 16:34
 * @Description WholeFileInputFormat implementation
 */
public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Never split a file: each small file must be read as a single record
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        WholeFileRecordReader reader = new WholeFileRecordReader();
        reader.initialize(inputSplit, taskAttemptContext);
        return reader;
    }
}
c. WholeFileRecordReader (complete code)
package com.shaonaiyi.hadoop.filetype.smallfiles;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;

/**
 * @Author [email protected]
 * @Date 2019/12/30 16:35
 * @Description WholeFileRecordReader implementation
 */
public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {
    private FileSplit fileSplit;
    private Configuration configuration;
    private BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        this.fileSplit = (FileSplit) inputSplit;
        this.configuration = taskAttemptContext.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        // Emit exactly one record per file: the file's entire contents.
        // The whole file is buffered in memory, which is fine for small files.
        if (!processed) {
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(configuration);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void close() throws IOException {
        // Nothing to close: the input stream is closed in nextKeyValue()
    }
}
0x02 Verifying the result
1. Start HDFS and YARN
start-dfs.sh
start-yarn.sh
2. Run the job
a. Package the project, upload the jar to master, and run it with two arguments (input path and output path):
yarn jar ~/jar/hadoop-learning-1.0.jar com.shaonaiyi.hadoop.filetype.smallfiles.SmallFilesToSequenceFileConverter /files /output
3. Check the result
a. A single output file is generated.
b. Viewing the file's contents directly shows the merged data, but the raw form is hard to read.
c. Viewing it with hadoop fs -text shows that each key is a file name and each value is that file's binary content.
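The commands below reproduce these checks; part-m-00000 is the usual name for the single output file of a map-only job, so adjust it if your output differs:
hadoop fs -ls /output
hadoop fs -cat /output/part-m-00000   # raw SequenceFile bytes, hard to read
hadoop fs -text /output/part-m-00000  # decoded: key = file name, value = file bytes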
0xFF Summary
- The input path contains four files, so by default four map tasks are launched. With CombineTextInputFormat they can be combined into a single map task:
job.setInputFormatClass(CombineTextInputFormat.class);
For the detailed steps, see the tutorial "Merging small files with CombineTextInputFormat (a tuning skill)"; a short driver sketch follows below.
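A minimal driver sketch, under stated assumptions: CombineTextInputFormat reads (LongWritable, Text) line records rather than whole files, so the mapper signature must match, and the 4 MB split cap below is an arbitrary example value, not from the original tutorial:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

public class CombineDriverSketch {
    public static Job configure() throws Exception {
        Job job = Job.getInstance(new Configuration(), "CombineSmallFiles");
        // One CombineTextInputFormat split can span many small files,
        // so far fewer map tasks are launched
        job.setInputFormatClass(CombineTextInputFormat.class);
        // Pack small files together until a split reaches ~4 MB (example value)
        CombineTextInputFormat.setMaxInputSplitSize(job, 4 * 1024 * 1024);
        // NOTE: the mapper must now accept (LongWritable, Text) line records,
        // not the (NullWritable, BytesWritable) whole-file records used above
        return job;
    }
}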