[Repost] [HDFS] Hive job reports HDFS exception: last block does not have enough number of replicas

Running a query script in Hive fails with the error "last block does not have enough number of replicas":

2018-10-15 2018-07-17
2018-10-15 10:00:01
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0

Logging initialized using configuration in jar:file:/data/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/jars/hive-common-1.1.0-cdh5.11.0.jar!/hive-log4j.properties
Query ID = work_20181015100000_e24dc755-be3e-4d26-b088-f7195d4a9f6d
Total jobs = 1
Stage-1 is selected by condition resolver.
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1099
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.io.IOException: Unable to close file because the last block BP-1541923511-10.28.4.4-1501148646603:blk_1906958696_833801584 does not have enough number of replicas.
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2705)
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2667)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2621)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
    at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:203)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:128)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:418)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Job Submission failed with exception 'java.io.IOException(Unable to close file because the last block BP-1541923511-10.28.4.4-1501148646603:blk_1906958696_833801584 does not have enough number of replicas.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Reference: per the article "[HDFS] Hive job reports HDFS exception: last block does not have enough number of replicas", the error is caused by excessive load on the Hadoop servers, and simply re-running the Hive SQL script gets past it. To fix the problem for good, however, you need to reduce task concurrency or cap CPU usage so as to ease pressure on the network, letting the DataNodes report block status to the NameNode in time.
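If the error keeps recurring before the load can be brought down, a common client-side mitigation (my addition, not from the original post) is to give the HDFS client more attempts at completeFile() before it gives up waiting for the NameNode to learn about the last block's replicas. The property dfs.client.block.write.locateFollowingBlock.retries is a real Hadoop 2.x setting (default 5); the value 10 below is an illustrative assumption. A minimal hdfs-site.xml sketch on the client side:

    <property>
      <!-- Hypothetical tuning, not from the original post: how many times the
           DFS client retries completeFile() while waiting for the NameNode to
           register enough replicas of the last block (default is 5). -->
      <name>dfs.client.block.write.locateFollowingBlock.retries</name>
      <value>10</value>
    </property>

This only buys the client more patience; it does not address the underlying overload described next.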

Conclusion:

Reduce the system load. When the problem occurred, the cluster was under very heavy load: all 32 CPU cores were at 100%, fully allocated to running MR tasks. At least 20% of the CPU should be kept in reserve.
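One hedged way to enforce that 20% headroom, assuming YARN with the LinuxContainerExecutor and cgroups enabled on the NodeManagers (a setup not described in the original post), is YARN's strict CPU limit. Both property names are real YARN settings; the value 80 simply mirrors the 20% reserve above. A yarn-site.xml sketch:

    <property>
      <!-- Cap all YARN containers at 80% of physical CPU, leaving ~20%
           headroom for the DataNode and the OS (default is 100). -->
      <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
      <value>80</value>
    </property>
    <property>
      <!-- Enforce the cap strictly via cgroups rather than as a
           best-effort share (default is false). -->
      <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
      <value>true</value>
    </property>

With the containers capped, the DataNode heartbeat and block-report threads are far less likely to be starved, which is what let the "does not have enough number of replicas" condition arise in the first place.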
