HBase: Could not seek StoreFileScanner[HFileScanner for reader exception

Recently, HBase queries started failing with an exception even though all HBase monitoring metrics looked normal. The exception is as follows:

hbase(main):003:0> get 'w:t','xxxx'
COLUMN                                                      CELL                                                                                                                                                                         

ERROR: java.io.IOException: Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs://hbasens/hbase/data/w/t/cb47830264f77f7/fam/4c0ee7475d31400, compression=none, cacheConf=blockCache=LruBlockCache{blockCount=301220, currentSize=20400963512, freeSize=1073872968, maxSize=21474836480, heapSize=20400963512, minSize=20401094656, minFactor=0.95, multiSize=10200547328, multiFactor=0.5, singleSize=5100273664, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, firstKey=1007aebeaf667f0f4f9a51d0f3fc6238/fam:c_id/1583256051687/Put, lastKey=xxxx/fam:type_id/1583353298724/Put, avgKeyLen=59, avgValueLen=18, entries=210762, length=18903885, cur=null] to key xxxxx/fam:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/seqid=0
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:218)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:350)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:199)
        at org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:2123)
        at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2113)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5682)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2637)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2623)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2604)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6968)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6927)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2027)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33644)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: java.io.IOException: On-disk size without header provided is 65596, but block header contains 0. Block offset: 3872612, data starts with: \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:526)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:92)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1705)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1548)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:446)
        at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:266)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:643)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:593)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:297)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:200)
        ... 16 more
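
The "On-disk size without header provided is 65596, but block header contains 0" message comes from HFileBlock.validateOnDiskSizeWithoutHeader: the reader expected a 65596-byte block at offset 3872612, but the header bytes it read back were all zeros, i.e. the data returned for that part of the store file was corrupt. One quick check is to read the suspect HFile directly with the HFile tool that ships with HBase. This is only a sketch; the store file path is copied from the error message above and should be replaced with the full path from your own cluster:

# Print the store file's metadata (trailer, bloom filter info, etc.).
hbase org.apache.hadoop.hbase.io.hfile.HFile -m -f hdfs://hbasens/hbase/data/w/t/cb47830264f77f7/fam/4c0ee7475d31400

# Scan every cell, which forces every data block to be read; a corrupt block
# should reproduce the same IOException at the same offset.
hbase org.apache.hadoop.hbase.io.hfile.HFile -p -f hdfs://hbasens/hbase/data/w/t/cb47830264f77f7/fam/4c0ee7475d31400 > /dev/null

If the scan fails on the same block, the problem is in the data coming back from HDFS rather than in the RegionServer's block cache or scanner state.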

hdfs fsck reported the file as healthy. Further investigation traced the problem to bad sectors on a DataNode disk; after taking the disk with bad sectors out of service and restarting the DataNode, queries returned to normal.
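
For reference, a rough sketch of the checks involved in this kind of triage (device names are placeholders): hdfs fsck only consults the NameNode's metadata, such as replica counts and block locations, and does not read the block contents, so it can report a file as healthy even while one DataNode serves garbage from a failing disk. The disk itself has to be examined on the DataNode host:

# Show which DataNodes hold replicas of the affected table's blocks.
hdfs fsck /hbase/data/w/t -files -blocks -locations

# On the suspect DataNode host: look for kernel I/O errors and check SMART health
# (/dev/sdX is a placeholder for the disk backing dfs.datanode.data.dir).
dmesg | grep -iE 'i/o error|sector'
smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrectable'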
