Notes on fixing java.io.IOException: Bad connect ack with firstBadLink as 192.168.X.X when reading/writing files on a DataNode

 Today I noticed that two jobs on the cluster had failed:

    Err log:

In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.io.IOException: Bad connect ack with firstBadLink as 192.168.44.57:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
Job Submission failed with exception 'java.io.IOException(Bad connect ack with firstBadLink as 192.168.44.57:50010)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 16  Reduce: 8  Cumulative CPU: 677.62 sec  HDFS Read: 1794135673  HDFS Write: 136964041  SUCCESS
Stage-Stage-5: Map: 16  Reduce: 8  Cumulative CPU: 864.95 sec  HDFS Read: 1794135673  HDFS Write: 120770083  SUCCESS
Stage-Stage-4: Map: 70  Reduce: 88  Cumulative CPU: 5431.46 sec  HDFS Read: 22519878178  HDFS Write: 422001541  SUCCESS
Total MapReduce CPU Time Spent: 0 days 1 hours 56 minutes 14 seconds 30 msec
task BFD_JOB_TASK_521_20150721041704 is complete.

     Roughly, the error means: while the job was running its map phase and the map output was about to be written out, it threw:

               java.io.IOException: Bad connect ack with firstBadLink as 192.168.44.57:50010

     The firstBadLink address identifies the DataNode in the HDFS write pipeline that failed to acknowledge the connection.

     Cause:

              When a DataNode writes data for HDFS, it actually goes through an intermediate service called xcievers to write the files onto the local Linux file system. These xcievers are simply the threads that handle reading and writing block files between the DataNode and its local disks.

     The more blocks a DataNode holds, the more of these threads it needs. The trouble is that the thread count has an upper limit (4096 by default), so when a DataNode carries too many blocks, some block files can no longer get a thread to handle their reads and writes. That is what produces the error above (the block write fails).
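     To confirm what limit a cluster is actually running with, a small client-side sketch like the one below can help. This is only my own illustration, not part of the original fix: it assumes the Hadoop client jars and the cluster's hdfs-site.xml are on the classpath, and the class name CheckTransferThreads is made up.

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.HdfsConfiguration;

 public class CheckTransferThreads {
     public static void main(String[] args) {
         // HdfsConfiguration pulls in hdfs-default.xml and hdfs-site.xml from the classpath.
         Configuration conf = new HdfsConfiguration();
         // 4096 is the shipped default; a value set in hdfs-site.xml overrides it.
         int maxThreads = conf.getInt("dfs.datanode.max.transfer.threads", 4096);
         System.out.println("dfs.datanode.max.transfer.threads = " + maxThreads);
     }
 }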

     Solution:

              Add the following to hdfs-site.xml:

 <property>
     <name>dfs.datanode.max.transfer.threads</name>
     <value>16000</value>
 </property>
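     Note that dfs.datanode.max.transfer.threads is a DataNode-side setting, so the DataNodes need to be restarted before the larger limit takes effect. On older Hadoop 1.x releases the same limit goes by the legacy (and historically misspelled) name dfs.datanode.max.xcievers; on those versions, setting that key to the same value should have the same effect.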

 

    Tips:

           The same limit can also cause this error:

                                 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
                                 blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes
                                 contain current block. Will get new block locations from namenode and retry...