FileStatus, BlockLocation, LocatedBlocks, and InputSplit in Hadoop

1 FileStatus

1.1 Package name
org.apache.hadoop.fs.FileStatus

1.2 Format

FileStatus{path=hdfs://192.X.X.X:9000/hadoop-2.7.1.tar.gz; isDirectory=false; length=210606807; replication=3; blocksize=134217728; modification_time=xxx; access_time=xxx; owner=xxx; group=supergroup; permission=rw-r--r--; isSymlink=false}
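
The line above is the output of FileStatus#toString(). A minimal sketch of how to obtain it (the HDFS URI is a placeholder taken from the example above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void printFileStatus() throws Exception {
    Configuration conf = new Configuration();
    // Placeholder URI; same file as in the toString() line above
    Path fpath = new Path("hdfs://192.X.X.X:9000/hadoop-2.7.1.tar.gz");
    FileSystem hdfs = fpath.getFileSystem(conf);
    FileStatus status = hdfs.getFileStatus(fpath);
    System.out.println(status);  // prints the FileStatus{...} line shown above
    System.out.println("length=" + status.getLen()
            + ", blocksize=" + status.getBlockSize()
            + ", replication=" + status.getReplication());
}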


2 BlockLocation

2.1 Package name
org.apache.hadoop.fs.BlockLocation

2.2 Call site
During job submission, JobSubmitter's writeNewSplits method calls List<InputSplit> splits = input.getSplits(job); inside getSplits the block locations are fetched with FileSystem#getFileBlockLocations(), which the getFileLocation() demo in 2.3 reproduces.

2.3 Format

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void getFileLocation() throws Exception {
    Configuration conf = new Configuration();
    Path fpath = new Path(AConstants.hdfsPath + "hadoop-2.7.1.tar.gz");
    FileSystem hdfs = fpath.getFileSystem(conf);
    FileStatus filestatus = hdfs.getFileStatus(fpath);
    // Ask the NameNode for the block locations covering the whole file
    BlockLocation[] blkLocations = hdfs.getFileBlockLocations(filestatus, 0, filestatus.getLen());
    System.out.println("total block num:" + blkLocations.length);
    for (int i = 0; i < blkLocations.length; i++) {
        System.out.println(blkLocations[i].toString());
        System.out.println("block offset within file: " + blkLocations[i].getOffset()
                + ", length: " + blkLocations[i].getLength());
    }
}
Output:

total block num:2
0,134217728,192.X.X.X
block offset within file: 0, length: 134217728
134217728,76389079,192.X.X.X
block offset within file: 134217728, length: 76389079


splits array info (built in FileInputFormat#getSplits)

int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
splits.add(makeSplit(path, length - bytesRemaining, splitSize,
        blkLocations[blkIndex].getHosts(),
        blkLocations[blkIndex].getCachedHosts()));
[hdfs://192.x.x.x:9000/hadoop-2.7.1.tar.gz:0+134217728, 
hdfs://192.x.x.x:9000/hadoop-2.7.1.tar.gz:134217728+31987190]
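
For reference, a minimal sketch that triggers the same getSplits() call outside a real job submission (TextInputFormat is an arbitrary choice here, and AConstants.hdfsPath is reused from the demo above; any FileInputFormat subclass would do):

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public static void printSplits() throws Exception {
    Job job = Job.getInstance(new Configuration());
    // Reuses the demo constant from getFileLocation() above
    FileInputFormat.addInputPath(job, new Path(AConstants.hdfsPath + "hadoop-2.7.1.tar.gz"));
    TextInputFormat input = new TextInputFormat();
    // Same call that writeNewSplits() makes; each FileSplit prints as path:start+length
    List<InputSplit> splits = input.getSplits(job);
    System.out.println(splits);
}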


3 LocatedBlocks

3.1 Package name
org.apache.hadoop.hdfs.protocol.LocatedBlocks

3.2 Call site
When HDFS opens a file for reading, openInfo() is called; it ultimately invokes DFSInputStream's fetchLocatedBlocksAndGetLastBlockLength method to fetch the block information as a LocatedBlocks object. The block information is very detailed: block name, size, start offset, datanode IP addresses, and so on.

Writing a file in Hadoop really means writing its blocks to datanodes; the namenode only learns which blocks make up the file from the datanodes' periodic block reports. So when a file is read, some blocks may not yet have been reported to the namenode, and the reader can only see data up to the last reported block. isLastBlockComplete indicates whether that last block is complete; if it is not, the client follows the block's pipeline from the metadata to the datanodes, obtains the length written so far, and stores it in lastBlockBeingWrittenLength.
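
A minimal sketch for dumping this structure from client code, assuming access to DistributedFileSystem#getClient() (it is exposed mainly for testing) and reusing the AConstants.hdfsPath constant from the earlier demo:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

public static void printLocatedBlocks() throws Exception {
    Configuration conf = new Configuration();
    Path fpath = new Path(AConstants.hdfsPath + "hadoop-2.7.1.tar.gz");
    DistributedFileSystem dfs = (DistributedFileSystem) fpath.getFileSystem(conf);
    DFSClient client = dfs.getClient();  // exposed mainly for tests
    long len = dfs.getFileStatus(fpath).getLen();
    // Ask the NameNode for block metadata over the whole file range
    LocatedBlocks lbs = client.getLocatedBlocks(fpath.toUri().getPath(), 0, len);
    System.out.println("isLastBlockComplete: " + lbs.isLastBlockComplete());
    for (LocatedBlock lb : lbs.getLocatedBlocks()) {
        System.out.println(lb.getBlock() + " offset=" + lb.getStartOffset()
                + " size=" + lb.getBlockSize());
    }
}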

3.3 Format

LocatedBlocks{
    fileLength=210606807
    underConstruction=false
    blocks=[LocatedBlock{BP-1853423215-192.X.X.X-1474747765776:blk_1073741828_1004;
            getBlockSize()=134217728; corrupt=false; offset=0;
            locs=[192.X.X.X:50010]},
        LocatedBlock{BP-1853423215-192.X.X.X-1474747765776:blk_1073741829_1005;
            getBlockSize()=76389079; corrupt=false; offset=134217728;
            locs=[192.X.X.X:50010]}]
    lastLocatedBlock=LocatedBlock{BP-1853423215-192.X.X.X-1474747765776:blk_1073741829_1005;
        getBlockSize()=76389079; corrupt=false; offset=134217728;
        locs=[192.X.X.X:50010]}
    isLastBlockComplete=true
}
