[hadoop2.7.1] I/O: MapFile (a sorted SequenceFile) read, write, and index-rebuild examples

 MapFile


A MapFile is a sorted SequenceFile. It consists of two parts: data and index.


index


The index file is the data file's index. It records Record keys together with the byte offset of each indexed Record in the data file. When a MapFile is accessed, the index file is loaded into memory, and the key-to-offset mapping is used to jump straight to the position of the requested Record, so lookups against a MapFile are much faster than scanning a plain SequenceFile. The drawback is that memory is spent holding the index: the entire index file is read into memory when the reader is opened, so keys should be designed to be as small as possible.
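Because that index sits in memory, a MapFile also supports random access by key via MapFile.Reader#get(). Below is a minimal sketch of such a lookup; the class name, path, and looked-up key are assumptions matching the write example later in this post:


package org.apache.hadoop.io;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch only: keyed random access against the MapFile written by the example
// below. The path and the looked-up key are assumptions for illustration.
public class THT_testMapFileGet {

	public static void main(String[] args) throws Exception {
		String uri = "file:///D://tmp//map1";
		Configuration conf = new Configuration();
		FileSystem fs = FileSystem.get(URI.create(uri), conf);

		MapFile.Reader reader = null;
		try {
			reader = new MapFile.Reader(fs, uri, conf);
			Text value = new Text();
			// get() binary-searches the in-memory index, seeks to the nearest
			// indexed position in the data file, then scans forward to the key;
			// it returns null if the key is not present.
			Writable found = reader.get(new IntWritable(3), value);
			System.out.println(found == null ? "key not found" : value);
		} finally {
			IOUtils.closeStream(reader);
		}
	}
}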


Read/write source code:


package org.apache.hadoop.io;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.ReflectionUtils;

public class THT_testMapFileWrite1 {

	private static final String[] DATA = { "One, two, buckle my shoe",
			"Three, four, shut the door", "Five, six, pick up sticks",
			"Seven, eight, lay them straight", "Nine, ten, a big fat hen" };

	public static void main(String[] args) throws IOException {
		// String uri = args[0];
		String uri = "file:///D://tmp//map1";
		Configuration conf = new Configuration();
		FileSystem fs = FileSystem.get(URI.create(uri), conf);

		IntWritable key = new IntWritable();
		Text value = new Text();
		MapFile.Writer writer = null;
		try {
			// Keys must be appended in sorted order; the writer builds the
			// index file alongside the data file as records are added.
			// (This (fs, uri) constructor is deprecated in Hadoop 2.x but still works.)
			writer = new MapFile.Writer(conf, fs, uri, key.getClass(),
					value.getClass());

			for (int i = 0; i < 10; i++) {
				key.set(i + 1);
				value.set(DATA[i % DATA.length]);
				writer.append(key, value);
			}
		} finally {
			IOUtils.closeStream(writer);
		}

		// Opening a MapFile.Reader loads the whole index file into memory.
		MapFile.Reader reader = null;
		try {
			reader = new MapFile.Reader(fs, uri, conf);
			WritableComparable keyR = (WritableComparable) ReflectionUtils
					.newInstance(reader.getKeyClass(), conf);
			Writable valueR = (Writable) ReflectionUtils.newInstance(
					reader.getValueClass(), conf);
			// next() walks the data file in key order.
			while (reader.next(keyR, valueR)) {
				System.out.printf("%s\t%s\n", keyR, valueR);
			}
		} finally {
			IOUtils.closeStream(reader);
		}

	}
}

Run output:


2015-11-08 11:46:09,532 INFO  compress.CodecPool (CodecPool.java:getDecompressor(181)) - Got brand-new decompressor [.deflate]
1	One, two, buckle my shoe
2	Three, four, shut the door
3	Five, six, pick up sticks
4	Seven, eight, lay them straight
5	Nine, ten, a big fat hen
6	One, two, buckle my shoe
7	Three, four, shut the door
8	Five, six, pick up sticks
9	Seven, eight, lay them straight
10	Nine, ten, a big fat hen

Running the program produces a directory containing two files, index and data:



index contents:
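Both index and data are themselves SequenceFiles, so their contents can be printed with a plain SequenceFile.Reader. Each index entry maps a key to a LongWritable byte offset into data; by default one entry is written every 128 records (io.map.index.interval, adjustable with MapFile.Writer#setIndexInterval), so with only 10 records the index above should hold a single entry. A minimal sketch that dumps the index, assuming the path used above:


package org.apache.hadoop.io;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile.Reader;

// Sketch only: print the key -> offset pairs stored in the index file of the
// MapFile created by the write example above.
public class THT_testMapFileDumpIndex {

	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		Path index = new Path("file:///D://tmp//map1", MapFile.INDEX_FILE_NAME);

		SequenceFile.Reader reader = null;
		try {
			reader = new SequenceFile.Reader(conf, Reader.file(index));
			IntWritable key = new IntWritable();      // same key type as the data file
			LongWritable offset = new LongWritable(); // byte offset into the data file
			while (reader.next(key, offset)) {
				System.out.printf("%s\t%s\n", key, offset);
			}
		} finally {
			IOUtils.closeStream(reader);
		}
	}
}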




data contents (the index-rebuild example below also dumps the data file):




Rebuilding the index


Here, first delete the index file that was just generated, then rebuild it with the source further below.
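Deleting the file can be done by hand or through the FileSystem API; a minimal sketch, assuming the same path as in the write example:


package org.apache.hadoop.io;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: remove just the index file so that MapFile.fix() has something
// to rebuild. The path assumes the MapFile directory created earlier.
public class THT_testDeleteIndex {

	public static void main(String[] args) throws Exception {
		String mapUri = "file:///D://tmp//map1";
		Configuration conf = new Configuration();
		FileSystem fs = FileSystem.get(URI.create(mapUri), conf);

		Path index = new Path(new Path(mapUri), MapFile.INDEX_FILE_NAME);
		boolean deleted = fs.delete(index, false); // non-recursive: it is a single file
		System.out.println("index deleted: " + deleted);
	}
}


With the index gone, MapFile.fix() can rebuild it from the data file: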


package org.apache.hadoop.io;

//cc MapFileFixer Re-creates the index for a MapFile
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.Reader;
import org.apache.hadoop.util.ReflectionUtils;

//vv MapFileFixer
public class THT_testMapFileFix {

	public static void main(String[] args) throws Exception {
		// String mapUri = args[0];
		String mapUri = "file:///D://tmp//map1";

		Configuration conf = new Configuration();

		FileSystem fs = FileSystem.get(URI.create(mapUri), conf);
		Path map = new Path(mapUri);
		Path mapData = new Path(map, MapFile.DATA_FILE_NAME);

		// Get key and value types from data sequence file
		SequenceFile.Reader reader = new SequenceFile.Reader(fs, mapData, conf);
		Class keyClass = reader.getKeyClass();
		Class valueClass = reader.getValueClass();
		reader.close();

		// Create the map file index file
		long entries = MapFile.fix(fs, map, keyClass, valueClass, false, conf);
		System.out.printf("Created MapFile %s with %d entries\n", map, entries);

		// The data file is itself a SequenceFile; dump it to verify the fix.
		SequenceFile.Reader.Option option1 = Reader.file(mapData);

		SequenceFile.Reader reader1 = null;
		try {
			reader1 = new SequenceFile.Reader(conf, option1);
			Writable key = (Writable) ReflectionUtils.newInstance(
					reader1.getKeyClass(), conf);
			Writable value = (Writable) ReflectionUtils.newInstance(
					reader1.getValueClass(), conf);
			long position = reader1.getPosition();
			while (reader1.next(key, value)) {
				String syncSeen = reader1.syncSeen() ? "*" : "";
				System.out.printf("[%s%s]\t%s\t%s\n", position, syncSeen, key,
						value);
				position = reader1.getPosition(); // beginning of next record
			}
		} finally {
			IOUtils.closeStream(reader1);
		}
	}

}
// ^^ MapFileFixer


The run output is as follows:


2015-11-08 12:16:34,015 INFO  compress.CodecPool (CodecPool.java:getCompressor(153)) - Got brand-new compressor [.deflate]
Created MapFile file:/D:/tmp/map1 with 10 entries
[128]	1	One, two, buckle my shoe
[173]	2	Three, four, shut the door
[220]	3	Five, six, pick up sticks
[264]	4	Seven, eight, lay them straight
[314]	5	Nine, ten, a big fat hen
[359]	6	One, two, buckle my shoe
[404]	7	Three, four, shut the door
[451]	8	Five, six, pick up sticks
[495]	9	Seven, eight, lay them straight
[545]	10	Nine, ten, a big fat hen
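
To confirm that the rebuilt index is actually usable, the MapFile can be re-opened and queried by key. A minimal sketch, again assuming the same path; getClosest() returns the first entry whose key is at or after the requested one and fills in its value:


package org.apache.hadoop.io;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch only: an index-backed lookup against the MapFile whose index was just
// rebuilt. The path and the looked-up key are assumptions.
public class THT_testMapFileVerifyFix {

	public static void main(String[] args) throws Exception {
		String mapUri = "file:///D://tmp//map1";
		Configuration conf = new Configuration();
		FileSystem fs = FileSystem.get(URI.create(mapUri), conf);

		MapFile.Reader reader = null;
		try {
			reader = new MapFile.Reader(fs, mapUri, conf);
			Text value = new Text();
			// getClosest() uses the in-memory index to find the first entry
			// whose key is >= the requested key.
			WritableComparable closest = reader.getClosest(new IntWritable(7), value);
			System.out.printf("%s\t%s\n", closest, value);
		} finally {
			IOUtils.closeStream(reader);
		}
	}
}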






