Debugging a "Java heap space" error in a job submitted to the YARN framework
//0, The error
Our hadoop-2.7 cluster is an old one whose execution engine is MR, not Tez.
Error: Java heap space
Container killed by the ApplicationMaster.
//1, Locating the error log
[root@my-hadoop-cluster hive]# grep -C 3 --color "log.dir" {HIVE_HOME}/conf/hive-log4j.properties
# Define some default values that can be overridden by system properties
hive.log.threshold=ALL
hive.root.logger=INFO,DRFA
hive.log.dir=/mnt/log/hive/scratch/${user.name}
hive.log.file=hive.log
# Define the root logger to the system property "hadoop.root.logger".
--
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
//2, cd into the Hive log directory and inspect hive.log
2018-08-05 13:26:28,570 ERROR [Thread-35]: exec.Task (SessionState.java:printError(948)) -
Task with the most failures(4):
-----
Task ID:
task_1532952070023_22931_r_000852
URL:
http://my-hadoop-cluster:8088/taskdetails.jsp?jobid=job_1532952070023_22931&tipid=task_1532952070023_22931_r_000852
-----
Diagnostic Messages for this Task:
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
2018-08-05 13:26:28,649 INFO [main]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(401)) - Killed application application_1532952070023_22931
//3, Analyzing the "Diagnostic Messages for this Task" section of the error log
This reduce-phase task was killed by the ApplicationMaster because the Java heap used by its container exceeded a limit. Which limit, exactly? Check mapred-site.xml for the reduce-phase heap limit applied to MapReduce jobs; on my cluster it is capped at 2 GB (the default is 1 GB):
[root@my-hadoop-cluster conf]# grep -iC 2 --color "reduce.memory.mb" mapred-site.xml
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>
Digging deeper: a task runs inside a container, i.e. inside a JVM, and the JVM heap of this job's reduce tasks grew past 2 GB. Why did it exceed 2 GB? Either the tasks create too many objects and fill the heap space, or the heap size is simply too small for the job and can be raised as needed, e.g. to 3072 MB.
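A side note on the diagnostics above: "Exit code is 143" is the conventional Unix encoding of death-by-signal, 128 plus the signal number. A quick sketch decoding it:

```python
import signal

# 143 = 128 + 15: the container JVM was sent SIGTERM by the kill
# request, which matches "Container killed on request" in the log.
SIGNAL_EXIT_BASE = 128
sig = signal.Signals(143 - SIGNAL_EXIT_BASE)
print(sig.name)  # SIGTERM
```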
//4, Fix
Either increase mapreduce.reduce.memory.mb, or shrink the amount of data each reduce task processes (for example, by raising the reducer count so the work is spread across more nodes).
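Both knobs can also be overridden per job from the Hive session instead of editing mapred-site.xml. The values below are illustrative, not tuned recommendations: a 3072 MB container, an -Xmx around 80% of it, and a larger reducer count.

```sql
-- Per-job overrides in the Hive session (example values):
SET mapreduce.reduce.memory.mb=3072;        -- container cap for reduce tasks
SET mapreduce.reduce.java.opts=-Xmx2457m;   -- JVM heap, kept under the cap
SET mapreduce.job.reduces=800;              -- more reducers, less data per task
```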
//5, A quick overview of memory allocation for MR jobs
$ cd {HIVE_HOME}/conf/
$ grep -iEC 2 --color "map.java.opts|reduce.java.opts" mapred-site.xml
<property>
<!-- Upper bound on the JVM heap available to the map-task child process in this container; exceeding it throws an OOM -->
<name>mapreduce.map.java.opts</name>
<value>-Xmx800m -verbose:gc -Xloggc:/tmp/@[email protected]</value>
</property>
--
<property>
<!-- Upper bound on the JVM heap available to the reduce-task child process in this container; exceeding it throws an OOM -->
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx1736m -verbose:gc -Xloggc:/tmp/@[email protected]</value>
</property>
$ grep -iEC 2 --color "reduce.memory.mb|map.memory.mb" mapred-site.xml
<property>
<!-- Upper bound on the container's total memory, monitored by the NM; a container that exceeds it is killed by the NM. The -Xmx in mapreduce.map.java.opts must be smaller than this value -->
<name>mapreduce.map.memory.mb</name>
<value>512</value>
</property>
--
<property>
<!-- Upper bound on the reduce container's total memory; a container that exceeds it is killed. The -Xmx in mapreduce.reduce.java.opts must be smaller than this value -->
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>
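As a rule of thumb (a common convention, not a Hadoop-mandated constant), the -Xmx in *.java.opts is kept at roughly 75-85% of the matching *.memory.mb so the JVM's non-heap overhead still fits inside the container. A small sketch using the reduce-side values from the config above:

```python
def heap_for_container(container_mb, ratio=0.8):
    """Suggest an -Xmx (in MB) leaving ~20% headroom for non-heap JVM memory."""
    return int(container_mb * ratio)

# mapreduce.reduce.memory.mb=2048 above pairs with -Xmx1736m (~85%),
# close to this rule of thumb.
print(heap_for_container(2048))  # 1638
print(heap_for_container(3072))  # 2457
```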