Resolving "There is insufficient memory for the Java Runtime Environment to continue"

On CentOS 6.4 x64 with JDK 1.7u21 and Hadoop 1.2.1, running Mahout 0.9 over a 5 GB dataset failed with: There is insufficient memory for the Java Runtime Environment to continue.


14/07/15 08:46:05 INFO mapred.JobClient: Task Id : attempt_201407141818_0002_m_000018_0, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)


attempt_201407141818_0002_m_000018_0: #
attempt_201407141818_0002_m_000018_0: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201407141818_0002_m_000018_0: # Cannot create GC thread. Out of system resources.
attempt_201407141818_0002_m_000018_0: # An error report file with more information is saved as:
attempt_201407141818_0002_m_000018_0: # /home/hadoop/hd_space/mapred/local/taskTracker/hadoop/jobcache/job_201407141818_0002/attempt_201407141818_0002_m_000018_0/work/hs_err_pid25377.log
14/07/15 08:46:07 INFO mapred.JobClient:  map 15% reduce 0%
14/07/15 08:46:09 INFO mapred.JobClient:  map 16% reduce 0%
14/07/15 08:46:09 INFO mapred.JobClient: Task Id : attempt_201407141818_0002_m_000018_1, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)


attempt_201407141818_0002_m_000018_1: #
attempt_201407141818_0002_m_000018_1: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201407141818_0002_m_000018_1: # Cannot create GC thread. Out of system resources.
attempt_201407141818_0002_m_000018_1: # An error report file with more information is saved as:
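The key line in the crash report is "Cannot create GC thread. Out of system resources." This means the JVM could not spawn a native thread, which usually points at a per-user process/thread limit rather than heap exhaustion. A minimal check, comparing how many processes the current user is running against its nproc limit (the threshold interpretation is up to you):

```shell
# Count the current user's processes and show them next to the nproc limit.
# Every JVM thread counts against this limit, so a busy Hadoop TaskTracker
# can exhaust it quickly.
used=$(ps -u "$(id -un)" -o pid= | wc -l)
limit=$(ulimit -u)
echo "processes: $used / $limit"
```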


Checking the system limits:

[root@NameNode ~]# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 2066288
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8

The open-file count looked too low. I checked /etc/security/limits.conf and /etc/sysctl.conf, swapped JDK versions, and so on, all to no avail.

Even after setting ulimit -c unlimited as root, it still failed.

[hadoop@NameNode mahout-distribution-0.9]$  ulimit -a

max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited

Digging further, another look in /etc/security/ showed that CentOS 6 adds a limits.d directory containing a file named 90-nproc.conf, with the following contents:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.


*          soft    nproc     1024
root       soft    nproc     unlimited
So this is where the 1024 process cap came from; I commented the line out.

Problem solved.
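Rather than commenting the line out entirely (which removes the fork-bomb protection for every user), a narrower fix is to add a per-user override in the same file. The username hadoop matches this setup; the 10240 value below is only an illustrative guess, sized to your workload:

```
# /etc/security/limits.d/90-nproc.conf
*          soft    nproc     1024
hadoop     soft    nproc     10240
root       soft    nproc     unlimited
```

The override takes effect on the hadoop user's next login session, so the TaskTracker needs to be restarted from a fresh session.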



