Hive Task Optimization - Current usage: 2.0 GB of 2 GB physical memory used; 4.0 GB of 16.2 GB virtual memory used.

Table of Contents

Error Background

Locating the Error

Client-side Log

Application Log

Individual Map and Reduce Task Error Logs

Error Analysis

Solutions

1. Disable the virtual memory check (not recommended)

2. Increase mapreduce.map.memory.mb or mapreduce.reduce.memory.mb (recommended)

3. Moderately increase yarn.nodemanager.vmem-pmem-ratio

4. Switch to a Spark SQL job (super slick, strongly recommended)

Summary


Error Background

          Roughly speaking, the job used more memory than its map and reduce tasks were configured with, so it failed. I had written an HQL statement, run it on the big data platform, and it errored out.

Locating the Error

Client-side Log

INFO  : converting to local hdfs://hacluster/tenant/yxs/product/resources/resources/jar/f3c06465-4af1-4756-894e-ce74ec11b9c3.jar
INFO  : Added [/opt/huawei/Bigdata/tmp/hivelocaltmp/session_resources/2d0a2efc-776c-4ccc-957d-927079862ab2_resources/f3c06465-4af1-4756-894e-ce74ec11b9c3.jar] to class path
INFO  : Added resources: [hdfs://hacluster/tenant/yxs/product/resources/resources/jar/f3c06465-4af1-4756-894e-ce74ec11b9c3.jar]
INFO  : Number of reduce tasks not specified. Estimated from input data size: 2
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:10
INFO  : Submitting tokens for job: job_1567609664100_85580
INFO  : Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hacluster
INFO  : Kind: HIVE_DELEGATION_TOKEN, Service: HiveServer2ImpersonationToken
INFO  : The url to track the job: https://yiclouddata03-szzb:26001/proxy/application_1567609664100_85580/
INFO  : Starting Job = job_1567609664100_85580, Tracking URL = https://yiclouddata03-szzb:26001/proxy/application_1567609664100_85580/
INFO  : Kill Command = /opt/huawei/Bigdata/FusionInsight_HD_V100R002C80SPC203/install/FusionInsight-Hive-1.3.0/hive-1.3.0/bin/..//../hadoop/bin/hadoop job  -kill job_1567609664100_85580
INFO  : Hadoop job information for Stage-6: number of mappers: 10; number of reducers: 2
INFO  : 2019-09-24 16:16:17,686 Stage-6 map = 0%,  reduce = 0%
INFO  : 2019-09-24 16:16:27,299 Stage-6 map = 20%,  reduce = 0%, Cumulative CPU 10.12 sec
INFO  : 2019-09-24 16:16:28,474 Stage-6 map = 30%,  reduce = 0%, Cumulative CPU 30.4 sec
INFO  : 2019-09-24 16:16:29,664 Stage-6 map = 70%,  reduce = 0%, Cumulative CPU 83.44 sec
INFO  : 2019-09-24 16:16:30,841 Stage-6 map = 90%,  reduce = 0%, Cumulative CPU 115.79 sec
INFO  : 2019-09-24 16:16:32,004 Stage-6 map = 91%,  reduce = 0%, Cumulative CPU 134.73 sec
INFO  : 2019-09-24 16:16:44,928 Stage-6 map = 92%,  reduce = 0%, Cumulative CPU 223.25 sec
INFO  : 2019-09-24 16:16:55,613 Stage-6 map = 93%,  reduce = 0%, Cumulative CPU 284.27 sec
INFO  : 2019-09-24 16:17:03,797 Stage-6 map = 94%,  reduce = 0%, Cumulative CPU 313.69 sec
INFO  : 2019-09-24 16:17:11,881 Stage-6 map = 90%,  reduce = 0%, Cumulative CPU 115.79 sec
INFO  : 2019-09-24 16:18:12,546 Stage-6 map = 90%,  reduce = 0%, Cumulative CPU 115.79 sec
INFO  : 2019-09-24 16:19:04,473 Stage-6 map = 91%,  reduce = 0%, Cumulative CPU 185.47 sec
INFO  : 2019-09-24 16:19:13,683 Stage-6 map = 92%,  reduce = 0%, Cumulative CPU 223.35 sec
INFO  : 2019-09-24 16:19:22,825 Stage-6 map = 93%,  reduce = 0%, Cumulative CPU 281.97 sec
INFO  : 2019-09-24 16:19:32,053 Stage-6 map = 94%,  reduce = 0%, Cumulative CPU 314.97 sec
INFO  : 2019-09-24 16:19:54,143 Stage-6 map = 95%,  reduce = 0%, Cumulative CPU 377.36 sec
INFO  : 2019-09-24 16:19:56,520 Stage-6 map = 90%,  reduce = 0%, Cumulative CPU 115.79 sec
INFO  : 2019-09-24 16:20:09,338 Stage-6 map = 91%,  reduce = 0%, Cumulative CPU 181.59 sec
INFO  : 2019-09-24 16:20:18,574 Stage-6 map = 92%,  reduce = 0%, Cumulative CPU 217.27 sec
INFO  : 2019-09-24 16:20:27,772 Stage-6 map = 93%,  reduce = 0%, Cumulative CPU 266.25 sec
INFO  : 2019-09-24 16:20:40,439 Stage-6 map = 94%,  reduce = 0%, Cumulative CPU 305.32 sec
INFO  : 2019-09-24 16:20:57,751 Stage-6 map = 90%,  reduce = 0%, Cumulative CPU 115.79 sec
INFO  : 2019-09-24 16:21:11,624 Stage-6 map = 91%,  reduce = 0%, Cumulative CPU 183.87 sec
INFO  : 2019-09-24 16:21:20,948 Stage-6 map = 92%,  reduce = 0%, Cumulative CPU 219.12 sec
INFO  : 2019-09-24 16:21:31,427 Stage-6 map = 93%,  reduce = 0%, Cumulative CPU 282.71 sec
INFO  : 2019-09-24 16:21:39,754 Stage-6 map = 94%,  reduce = 0%, Cumulative CPU 317.99 sec
INFO  : 2019-09-24 16:21:45,519 Stage-6 map = 100%,  reduce = 100%, Cumulative CPU 115.79 sec
INFO  : MapReduce Total cumulative CPU time: 1 minutes 55 seconds 790 msec
ERROR : Ended Job = job_1567609664100_85580 with errors
Task T_6260893799950704_20190924161555945_1_1 failed. Failure reason: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:283)
	at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:379)
	at com.dtwave.dipper.dubhe.node.executor.runner.impl.Hive2TaskRunner.doRun(Hive2TaskRunner.java:244)
	at com.dtwave.dipper.dubhe.node.executor.runner.BasicTaskRunner.execute(BasicTaskRunner.java:100)
	at com.dtwave.dipper.dubhe.node.executor.TaskExecutor.run(TaskExecutor.java:32)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


Task run failed (Failed)

       Staring at that error, you're probably left completely baffled... blank stare, questioning life, haha...

Application Log

       You can't actually tell what went wrong from that alone, so you need to go into YARN and look at the application's run log, shown below:

2019-09-24 16:16:27,712 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 3
2019-09-24 16:16:27,712 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e29_1567609664100_85580_01_000011 taskAttempt attempt_1567609664100_85580_m_000009_0
2019-09-24 16:16:27,713 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1567609664100_85580_m_000009_0
2019-09-24 16:16:27,713 INFO [ContainerLauncher #2] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : yiclouddata04-SZZB:26009
2019-09-24 16:16:27,997 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:10 AssignedReds:0 CompletedMaps:3 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:28,005 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000009
2019-09-24 16:16:28,006 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000011
2019-09-24 16:16:28,006 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000003
2019-09-24 16:16:28,006 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:125952, vCores:6>
2019-09-24 16:16:28,006 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:28,006 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:7 AssignedReds:0 CompletedMaps:3 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:28,006 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000008_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:28,006 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000009_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:28,006 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000007_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:28,557 INFO [IPC Server handler 7 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1567609664100_85580_m_000006_0
2019-09-24 16:16:28,558 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Task Attempt attempt_1567609664100_85580_m_000006_0 finished. Firing CONTAINER_AVAILABLE_FOR_REUSE event to ContainerAllocator
2019-09-24 16:16:28,558 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1567609664100_85580_m_000006_0 TaskAttempt Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:28,558 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1567609664100_85580_m_000006_0
2019-09-24 16:16:28,558 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1567609664100_85580_m_000006 Task Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:28,559 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 4
2019-09-24 16:16:28,560 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e29_1567609664100_85580_01_000007 taskAttempt attempt_1567609664100_85580_m_000006_0
2019-09-24 16:16:28,560 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1567609664100_85580_m_000006_0
2019-09-24 16:16:28,560 INFO [ContainerLauncher #5] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : yiclouddata05-SZZB:26009
2019-09-24 16:16:28,851 INFO [IPC Server handler 10 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1567609664100_85580_m_000005_0
2019-09-24 16:16:28,852 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Task Attempt attempt_1567609664100_85580_m_000005_0 finished. Firing CONTAINER_AVAILABLE_FOR_REUSE event to ContainerAllocator
2019-09-24 16:16:28,852 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1567609664100_85580_m_000005_0 TaskAttempt Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:28,852 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1567609664100_85580_m_000005_0
2019-09-24 16:16:28,852 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1567609664100_85580_m_000005 Task Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:28,853 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 5
2019-09-24 16:16:28,856 INFO [ContainerLauncher #8] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e29_1567609664100_85580_01_000008 taskAttempt attempt_1567609664100_85580_m_000005_0
2019-09-24 16:16:28,856 INFO [ContainerLauncher #8] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1567609664100_85580_m_000005_0
2019-09-24 16:16:28,856 INFO [ContainerLauncher #8] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : yiclouddata16-SZZB:26009
2019-09-24 16:16:28,986 INFO [IPC Server handler 16 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1567609664100_85580_m_000004_0
2019-09-24 16:16:28,987 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Task Attempt attempt_1567609664100_85580_m_000004_0 finished. Firing CONTAINER_AVAILABLE_FOR_REUSE event to ContainerAllocator
2019-09-24 16:16:28,987 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1567609664100_85580_m_000004_0 TaskAttempt Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:28,987 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1567609664100_85580_m_000004_0
2019-09-24 16:16:28,988 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1567609664100_85580_m_000004 Task Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:28,989 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 6
2019-09-24 16:16:28,989 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e29_1567609664100_85580_01_000005 taskAttempt attempt_1567609664100_85580_m_000004_0
2019-09-24 16:16:28,990 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1567609664100_85580_m_000004_0
2019-09-24 16:16:28,990 INFO [ContainerLauncher #6] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : yiclouddata10-SZZB:26009
2019-09-24 16:16:29,006 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:7 AssignedReds:0 CompletedMaps:6 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:29,008 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000008
2019-09-24 16:16:29,009 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000007
2019-09-24 16:16:29,009 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000005_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:29,009 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:130048, vCores:8>
2019-09-24 16:16:29,009 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:29,009 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000006_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:29,009 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:5 AssignedReds:0 CompletedMaps:6 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:29,582 INFO [IPC Server handler 12 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1567609664100_85580_m_000002_0
2019-09-24 16:16:29,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Task Attempt attempt_1567609664100_85580_m_000002_0 finished. Firing CONTAINER_AVAILABLE_FOR_REUSE event to ContainerAllocator
2019-09-24 16:16:29,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1567609664100_85580_m_000002_0 TaskAttempt Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:29,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1567609664100_85580_m_000002_0
2019-09-24 16:16:29,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1567609664100_85580_m_000002 Task Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:29,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 7
2019-09-24 16:16:29,585 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e29_1567609664100_85580_01_000010 taskAttempt attempt_1567609664100_85580_m_000002_0
2019-09-24 16:16:29,586 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1567609664100_85580_m_000002_0
2019-09-24 16:16:29,586 INFO [ContainerLauncher #4] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : yiclouddata14-SZZB:26009
2019-09-24 16:16:30,009 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:5 AssignedReds:0 CompletedMaps:7 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:30,013 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000010
2019-09-24 16:16:30,013 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000005
2019-09-24 16:16:30,013 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000002_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:30,013 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:134144, vCores:10>
2019-09-24 16:16:30,013 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:30,013 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:3 AssignedReds:0 CompletedMaps:7 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:30,013 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000004_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:30,416 INFO [IPC Server handler 6 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1567609664100_85580_m_000001_0
2019-09-24 16:16:30,417 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Task Attempt attempt_1567609664100_85580_m_000001_0 finished. Firing CONTAINER_AVAILABLE_FOR_REUSE event to ContainerAllocator
2019-09-24 16:16:30,417 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1567609664100_85580_m_000001_0 TaskAttempt Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:30,417 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1567609664100_85580_m_000001_0
2019-09-24 16:16:30,418 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1567609664100_85580_m_000001 Task Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:30,418 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 8
2019-09-24 16:16:30,419 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e29_1567609664100_85580_01_000004 taskAttempt attempt_1567609664100_85580_m_000001_0
2019-09-24 16:16:30,419 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1567609664100_85580_m_000001_0
2019-09-24 16:16:30,419 INFO [ContainerLauncher #3] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : yiclouddata12-SZZB:26009
2019-09-24 16:16:30,440 INFO [IPC Server handler 7 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1567609664100_85580_m_000003_0
2019-09-24 16:16:30,442 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Task Attempt attempt_1567609664100_85580_m_000003_0 finished. Firing CONTAINER_AVAILABLE_FOR_REUSE event to ContainerAllocator
2019-09-24 16:16:30,442 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1567609664100_85580_m_000003_0 TaskAttempt Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:30,442 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1567609664100_85580_m_000003_0
2019-09-24 16:16:30,442 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1567609664100_85580_m_000003 Task Transitioned from RUNNING to SUCCEEDED
2019-09-24 16:16:30,442 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 9
2019-09-24 16:16:30,443 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e29_1567609664100_85580_01_000002 taskAttempt attempt_1567609664100_85580_m_000003_0
2019-09-24 16:16:30,446 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1567609664100_85580_m_000003_0
2019-09-24 16:16:30,447 INFO [ContainerLauncher #7] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : yiclouddata11-SZZB:26009
2019-09-24 16:16:30,556 INFO [IPC Server handler 8 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1567609664100_85580_m_31885837205506 asked for a task
2019-09-24 16:16:30,556 INFO [IPC Server handler 8 on 27102] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1567609664100_85580_m_31885837205506 is invalid and will be killed.
2019-09-24 16:16:31,013 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:3 AssignedReds:0 CompletedMaps:9 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:31,017 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000004
2019-09-24 16:16:31,017 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000002
2019-09-24 16:16:31,017 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:138240, vCores:12>
2019-09-24 16:16:31,017 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000001_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:31,017 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:31,017 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:9 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:16:31,017 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000003_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2019-09-24 16:16:34,026 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:128000, vCores:10>
2019-09-24 16:16:34,026 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:36,032 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:125952, vCores:9>
2019-09-24 16:16:36,032 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:47,061 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:115712, vCores:7>
2019-09-24 16:16:47,061 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:58,089 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:105472, vCores:5>
2019-09-24 16:16:58,090 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:16:59,092 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:84992, vCores:1>
2019-09-24 16:16:59,092 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:17:06,109 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:125952, vCores:9>
2019-09-24 16:17:06,109 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:17:08,113 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:115712, vCores:7>
2019-09-24 16:17:08,113 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:17:09,115 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:95232, vCores:3>
2019-09-24 16:17:09,115 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:17:10,117 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:84992, vCores:1>
2019-09-24 16:17:10,117 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:17:11,121 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1567609664100_85580_01_000006
2019-09-24 16:17:11,122 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:76800, vCores:0>
2019-09-24 16:17:11,122 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 10
2019-09-24 16:17:11,122 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:2 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:9 CompletedReds:0 ContAlloc:10 ContRel:0 HostLocal:8 RackLocal:1
2019-09-24 16:17:11,122 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1567609664100_85580_m_000000_0: Container [pid=44860,containerID=container_e29_1567609664100_85580_01_000006] is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical memory used; 4.0 GB of 16.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_e29_1567609664100_85580_01_000006 :
	|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
	|- 44881 44860 44860 44860 (java) 21865 1198 4183670784 526521 /opt/huawei/Bigdata/common/runtime0/jdk1.8.0_162//bin/java -Djava.security.auth.login.config=/opt/huawei/Bigdata/FusionInsight_Current/1_11_NodeManager/etc/jaas-zk.conf -Dzookeeper.server.principal=zookeeper/hadoop.hadoop.com -Dzookeeper.request.timeout=120000 -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Xmx2048M -Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/opt/huawei/Bigdata/common/runtime/krb5.conf -Djava.io.tmpdir=/srv/BigData/hadoop/data6/nm/localdir/usercache/yxs_product/appcache/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.240.250.133 27102 attempt_1567609664100_85580_m_000000_0 31885837205510 
	|- 44860 44857 44860 44860 (bash) 2 1 116031488 374 /bin/bash -c /opt/huawei/Bigdata/common/runtime0/jdk1.8.0_162//bin/java -Djava.security.auth.login.config=/opt/huawei/Bigdata/FusionInsight_Current/1_11_NodeManager/etc/jaas-zk.conf -Dzookeeper.server.principal=zookeeper/hadoop.hadoop.com -Dzookeeper.request.timeout=120000 -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Xmx2048M -Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/opt/huawei/Bigdata/common/runtime/krb5.conf -Djava.io.tmpdir=/srv/BigData/hadoop/data6/nm/localdir/usercache/yxs_product/appcache/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.240.250.133 27102 attempt_1567609664100_85580_m_000000_0 31885837205510 1>/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/stdout 2>/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/stderr  

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

Individual Map and Reduce Task Error Logs

          At this point I honestly still couldn't see what the error was, so I kept digging into the detailed error messages of the individual map and reduce tasks:

The error log is as follows:

Container [pid=44860,containerID=container_e29_1567609664100_85580_01_000006] is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical memory used; 4.0 GB of 16.2 GB virtual memory used. Killing container. Dump of the process-tree for container_e29_1567609664100_85580_01_000006 : |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 44881 44860 44860 44860 (java) 21865 1198 4183670784 526521 /opt/huawei/Bigdata/common/runtime0/jdk1.8.0_162//bin/java -Djava.security.auth.login.config=/opt/huawei/Bigdata/FusionInsight_Current/1_11_NodeManager/etc/jaas-zk.conf -Dzookeeper.server.principal=zookeeper/hadoop.hadoop.com -Dzookeeper.request.timeout=120000 -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Xmx2048M -Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/opt/huawei/Bigdata/common/runtime/krb5.conf -Djava.io.tmpdir=/srv/BigData/hadoop/data6/nm/localdir/usercache/yxs_product/appcache/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.240.250.133 27102 attempt_1567609664100_85580_m_000000_0 31885837205510 |- 44860 44857 44860 44860 (bash) 2 1 116031488 374 /bin/bash -c /opt/huawei/Bigdata/common/runtime0/jdk1.8.0_162//bin/java -Djava.security.auth.login.config=/opt/huawei/Bigdata/FusionInsight_Current/1_11_NodeManager/etc/jaas-zk.conf -Dzookeeper.server.principal=zookeeper/hadoop.hadoop.com -Dzookeeper.request.timeout=120000 -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Xmx2048M -Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/opt/huawei/Bigdata/common/runtime/krb5.conf -Djava.io.tmpdir=/srv/BigData/hadoop/data6/nm/localdir/usercache/yxs_product/appcache/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.240.250.133 27102 attempt_1567609664100_85580_m_000000_0 31885837205510 1>/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/stdout 2>/srv/BigData/hadoop/data10/nm/containerlogs/application_1567609664100_85580/container_e29_1567609664100_85580_01_000006/stderr Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143

Container [pid=44860,containerID=container_e29_1567609664100_85580_01_000006] is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical memory used; 4.0 GB of 16.2 GB virtual memory used. Killing container.

OK, at this point we've finally found the cause of the error.

Error Analysis

First, check the relevant configuration on the YARN side.

ERROR:Container [pid=44860,containerID=container_e29_1567609664100_85580_01_000006] is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical memory used; 4.0 GB of 16.2 GB virtual memory used. Killing container.

2.0 GB: the physical memory the task actually used
2 GB: the physical memory limit of the map container, i.e. mapreduce.map.memory.mb as configured
4.0 GB: the virtual memory the process used
16.2 GB: the virtual memory limit, i.e. mapreduce.map.memory.mb multiplied by yarn.nodemanager.vmem-pmem-ratio

Here yarn.nodemanager.vmem-pmem-ratio is the allowed ratio of virtual memory to physical memory per container; it is set in yarn-site.xml and defaults to 2.1.
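A quick sanity check on those numbers (assuming, as the message implies, that the map containers were allocated 2 GB): virtual limit = 2 GB × yarn.nodemanager.vmem-pmem-ratio. With the stock default of 2.1 that would only be about 4.2 GB, whereas the log reports a 16.2 GB virtual limit, so this cluster's ratio appears to be set to roughly 16.2 / 2 = 8.1. Either way, what was actually exceeded here is the 2 GB physical limit, not the virtual one.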

Clearly, the container needed more memory than the task's physical memory limit allows (running beyond physical memory limits), so YARN killed the container.

The error above was raised in a map task, but it can just as easily happen in a reduce task; in that case the virtual memory ceiling is mapreduce.reduce.memory.mb * yarn.nodemanager.vmem-pmem-ratio.

Physical memory: the actual hardware (the RAM modules).
Virtual memory: a block of logical memory backed by disk space; the disk space used for it is called swap space. (It is a strategy for coping with a shortage of physical memory.)
When physical memory runs short, Linux falls back on the swap partition: the kernel writes memory pages that are not currently needed out to swap space, which frees physical memory for other uses; when the original contents are needed again, they are read back from swap into physical memory.

Solutions

1. Disable the virtual memory check (not recommended)

Set yarn.nodemanager.vmem-check-enabled to false, either in yarn-site.xml or in the job configuration:

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers.</description>
</property>

The check that failed here was on physical memory, but containers can also be killed for exceeding the virtual memory limit; correspondingly, the physical memory check can be disabled too:

yarn.nodemanager.pmem-check-enabled: false
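In yarn-site.xml this takes the same form as the property above; a sketch, with the value shown purely as an example (again, not something I'd recommend on a shared cluster):

<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
  <description>Whether physical memory limits will be enforced for containers.</description>
</property>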

Personally, I don't think this approach is a good one: if the program has a memory leak or a similar problem, removing these checks could end up bringing down the cluster.

2. Increase mapreduce.map.memory.mb or mapreduce.reduce.memory.mb (recommended)
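For example, raise the container size for whichever side is being killed, and keep the JVM heap (the corresponding *.java.opts) at roughly 80% of the container so the process stays inside the limit. The 4096 / -Xmx3276m values below are placeholders to show the shape, not tuned recommendations; the same keys can be set per job in the Hive session (for example set mapreduce.map.memory.mb=4096;) or as cluster defaults in mapred-site.xml:

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3276m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>
</property>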

3. Moderately increase yarn.nodemanager.vmem-pmem-ratio

        This allows more virtual memory per unit of physical memory, but don't push the parameter to an absurd value either.
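As a sketch of what that looks like in yarn-site.xml (the value 4 is purely illustrative; the stock default is 2.1, and since this is a NodeManager setting it applies cluster-wide rather than per job):

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers.</description>
</property>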

4. Switch to a Spark SQL job (super slick, strongly recommended)

Summary

          Task memory issues come down to two things, physical memory and virtual memory, and exceeding either limit fails the task. Adjust the corresponding parameters sensibly and the job will run through. If the memory a task demands is truly outrageous, though, first suspect the program itself: a memory leak, data skew, and so on, and fix those at the code level first. The ultimate move: split the data evenly into several smaller tasks and process them separately.

Or just go with Spark~

Works like a charm!!!