Uploading a file to HDFS from the web fails with java.io.IOException: File xx could only be written to 0 of the 1 minReplication nodes

Problem description

HDFS is deployed on an Alibaba Cloud server. Uploading a file to HDFS on that server through the web client fails with: java.io.IOException: File /17035041/服務器測試/test4.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
The error in the NameNode log is as follows:

2020-06-30 18:22:44,035 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: DFSClient_NONMAPREDUCE_437409007_30, pending creates: 1] has expired hard limit
2020-06-30 18:22:44,036 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  Holder: DFSClient_NONMAPREDUCE_437409007_30, pending creates: 1], src=/17035041/服務器測試/test1.txt
2020-06-30 18:22:44,036 WARN org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All existing blocks are COMPLETE, lease removed, file /17035041/服務器測試/test1.txt closed.
2020-06-30 18:22:44,036 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 10
2020-06-30 18:33:49,414 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 12
2020-06-30 18:33:49,418 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741861_1048, replicas=xx.xx.xx.xx:9866 for /17035041/服務器測試/test4.txt
2020-06-30 18:33:49,426 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-06-30 18:33:49,426 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-06-30 18:33:49,426 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-06-30 18:33:49,426 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on default port 9000, call Call#44 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 39.98.61.186:33354
java.io.IOException: File /17035041/服務器測試/test4.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)

Solution:

1. Turn off the server firewall.
2. Add Hadoop's commonly used default ports to the security group, e.g. 9000, 50070, 9866, etc.
3. (Important: this is what actually fixed it for me.)
Looking at the log, Hadoop hands the actual write off to a DataNode on a separately assigned port. If the client reaches the DataNode by IP address plus port, that port may not be covered by the security group and the request gets blocked by the network security rules, so the client has to be configured to reach the DataNode by hostname instead. Add the last <property> entry below to hdfs-site.xml:

        <!-- Number of HDFS replicas -->
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.name.dir</name>
                <value>/usr/local/hadoop/data/name</value>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/usr/local/hadoop/data/data</value>
        </property>
        <property>
                <name>dfs.datanode.http.address</name>
                <value>chxy:9864</value>
        </property>
        <!-- Have the client connect to DataNodes by hostname instead of IP address -->
        <property>
                <name>dfs.client.use.datanode.hostname</name>
                <value>true</value>
        </property>
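
If the upload is done from a Java web application through the HDFS client API, the same switch can also be set on the client-side Configuration instead of relying on an hdfs-site.xml on the classpath. The sketch below is only an illustration under that assumption; the NameNode URI hdfs://chxy:9000, the user name hadoop, and the file paths are placeholders, not taken from the original setup:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUploadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to DataNodes by hostname rather than their (internal) IP,
        // so the hosts-file mapping configured on the client takes effect.
        conf.set("dfs.client.use.datanode.hostname", "true");
        conf.set("dfs.replication", "1");

        // "hdfs://chxy:9000" and the user "hadoop" are placeholders for this sketch.
        FileSystem fs = FileSystem.get(new URI("hdfs://chxy:9000"), conf, "hadoop");

        // Upload a local file; both paths are illustrative only.
        fs.copyFromLocalFile(new Path("/tmp/test4.txt"), new Path("/17035041/test4.txt"));
        fs.close();
    }
}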

Also configure the hosts file on the client (web) machine to map the server's IP to the DataNode hostname:

ip  hostname
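
For example (assuming the DataNode hostname is chxy, as used in the hdfs-site.xml above; replace the placeholder with the server's actual public IP):

<server-public-ip>  chxy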