Running hdfs dfs -put reports the error org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test/a.txt._C

Cause: the NameNode was probably formatted more than once.
Check the cluster's disk capacity:

 hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

All disk capacity figures are 0, so no DataNode is contributing storage.
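This zero-capacity symptom can also be spotted in a script. A minimal sketch, using the sample report text shown above (in practice you would pipe the live hdfs dfsadmin -report output into the same awk pipeline):

```shell
# Parse a dfsadmin report and flag a zero-capacity cluster.
# Live usage would be:
#   hdfs dfsadmin -report | awk -F': ' '/^Configured Capacity/ {print $2}'
report='Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)'

# Extract the first number after "Configured Capacity:".
capacity=$(printf '%s\n' "$report" | awk -F': ' '/^Configured Capacity/ {print $2}' | awk '{print $1}')

if [ "$capacity" = "0" ]; then
    echo "zero capacity: DataNodes have not registered with the NameNode"
fi
```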

Solution: reinitialize the cluster.

1. Stop the cluster first:

./stop-all.sh

2. Delete the data directories configured for HDFS:
rm -rf /home/hadoop/data/name/*
rm -rf /home/hadoop/data/hdfs/edits/*
rm -rf /home/hadoop/data/datanode/*
rm -rf /home/hadoop/data/journaldata/jn/*
rm -rf /home/hadoop/data/tmp/*
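The five deletions above can be wrapped in one loop. A hedged sketch, with a hypothetical helper name clean_hdfs_dirs and the directory layout from this post (adjust the subdirectory list to match your own dfs.namenode.name.dir / dfs.datanode.data.dir settings):

```shell
# clean_hdfs_dirs BASE : wipe the contents of each HDFS data directory under BASE.
clean_hdfs_dirs() {
    # "${1:?}" aborts if no argument is given, so we never expand to rm -rf "/..."/*
    base="${1:?usage: clean_hdfs_dirs BASE}"
    for d in name hdfs/edits datanode journaldata/jn tmp; do
        rm -rf "${base}/${d}"/*
    done
}

# For this post's layout you would run:
#   clean_hdfs_dirs /home/hadoop/data
```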

Reformat the NameNode (run from the bin directory under the Hadoop home):

bin/hdfs namenode -format      # format the NameNode
bin/hdfs zkfc -formatZK        # format the HA state in ZooKeeper
bin/hdfs namenode              # optionally start the NameNode in the foreground to watch its logs

Done.
Start the cluster and check the report again:

sbin/start-dfs.sh
hadoop dfsadmin -report

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 84088270848 (78.31 GB)
Present Capacity: 80024354945 (74.53 GB)
DFS Remaining: 80024342528 (74.53 GB)
DFS Used: 12417 (12.13 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0


Live datanodes (3):

Name: 192.168.1.104:50010 (CDHNode4)
Hostname: CDHNode4
Decommission Status : Normal
Configured Capacity: 28029423616 (26.10 GB)
DFS Used: 4139 (4.04 KB)
Non DFS Used: 1322291157 (1.23 GB)
DFS Remaining: 26707128320 (24.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 95.28%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
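As a final scripted sanity check, counting the "Name:" lines in the report gives the live DataNode count. The sample text below is illustrative: only CDHNode4 appears in the output above, so the other two lines are hypothetical stand-ins for the remaining nodes:

```shell
# Count live DataNodes from a dfsadmin report.
# Live usage would be:
#   hdfs dfsadmin -report | grep -c '^Name: '
report='Live datanodes (3):
Name: 192.168.1.104:50010 (CDHNode4)
Name: 192.168.1.105:50010 (CDHNode5)
Name: 192.168.1.106:50010 (CDHNode6)'

live=$(printf '%s\n' "$report" | grep -c '^Name: ')
echo "live datanodes: $live"
```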
Solved!
If you have any questions, feel free to reach out: [email protected]
