HBase - A summary of problems encountered when building cubes with Kylin

1. A single non-empty record shows up as nulls and causes an error

Vertex failed, vertexName=Map 1, vertexId=vertex_1494251465823_0017_1_01, diagnostics=[Task failed, taskId=task_1494251465823_0017_1_01_000021, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"weibo_id":"3959784874733088","content":null,"json_file":null,"geohash":null,"user_id":"3190257607","time_id":null,"city_id":null,"province_id":null,"country_id":null,"unix_time":null,"pic_url":null,"lat":null,"lon":null}
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"weibo_id":"3959784874733088","content":null,"json_file":null,"geohash":null,"user_id":"3190257607","time_id":null,"city_id":null,"province_id":null,"country_id":null,"unix_time":null,"pic_url":null,"lat":null,"lon":null}
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:325)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
    ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"weibo_id":"3959784874733088","content":null,"json_file":null,"geohash":null,"user_id":"3190257607","time_id":null,"city_id":null,"province_id":null,"country_id":null,"unix_time":null,"pic_url":null,"lat":null,"lon":null}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:565)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
    ... 17 more

Querying Hive directly shows that this Weibo record is actually non-empty:

select * from check_in_table where weibo_id = '3959784874733088';

result:
3959784874733088    #清明祭英烈#今天的和平安定是先烈們用生命換來的,我們要珍惜今天的和平生活,努力學習,早日實現中國夢。 http://t.cn/R2dLEhU {...}   wtn901f5q32n    3190257607  1459526400000   1901    19    00    2016-04-02 12:02:26 0   28.31096    121.64364

Resolved by modifying the star schema model.
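Before changing the model, a check along these lines can show whether the offending record (or others like it) is missing dimension join keys in the source table. The table and column names are taken from the log above; the query itself is only an illustrative diagnostic, not part of the original fix:

-- count source rows whose dimension keys are null (illustrative diagnostic)
select count(*)
from check_in_table
where time_id is null
   or city_id is null
   or province_id is null
   or country_id is null;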

2. Kylin build fails with an HBase scanner timeout error

The error is as follows:

Vertex re-running, vertexName=Map 2, vertexId=vertex_1494251465823_0016_1_00
Vertex failed, vertexName=Map 1, vertexId=vertex_1494251465823_0016_1_01, diagnostics=[Task failed, taskId=task_1494251465823_0016_1_01_000015, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: org.apache.hadoop.hbase.client.ScannerTimeoutException: 425752ms passed since the last invocation, timeout is currently set to 60000
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: org.apache.hadoop.hbase.client.ScannerTimeoutException: 425752ms passed since the last invocation, timeout is currently set to 60000
    at ...

Solutions:

Method 1: change the configuration in client code

// requires org.apache.hadoop.hbase.HBaseConfiguration and org.apache.hadoop.hbase.HConstants
Configuration conf = HBaseConfiguration.create();
conf.setLong(HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY, 120000);

This changes the timeout in code, but the value is only set in the client application; in my tests it was not propagated to the remote region servers, so the change had no effect. I do not know whether anyone has made it work this way.

Method 2: modify the configuration file (hbase-site.xml) directly

<property>
    <name>hbase.regionserver.lease.period</name>    
    <value>900000</value> 
    <!-- 900 000, 15 minutes -->  
</property>  
<property>    
    <name>hbase.rpc.timeout</name>    
    <value>900000</value> 
    <!-- 15 minutes -->  
</property>
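Note (my own addition, to be verified against your HBase version): in newer HBase releases hbase.regionserver.lease.period is deprecated in favour of the client-side scanner timeout property, so on such clusters the following property may be the one that actually needs raising:

<property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>900000</value>
    <!-- 15 minutes -->
</property>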

3. "#4 Step Name: Build Dimension Dictionary" fails with "Too high cardinality"

java.lang.RuntimeException: Failed to create dictionary on WEIBODATA.CHECK_IN_TABLE.USER_ID
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:325)
    at org.apache.kylin.cube.CubeManager.buildDictionary(CubeManager.java:222)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:50)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:41)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:54)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Too high cardinality is not suitable for dictionary -- cardinality: 11824431
    at org.apache.kylin.dict.DictionaryGenerator.buildDictionary(DictionaryGenerator.java:96)
    at org.apache.kylin.dict.DictionaryGenerator.buildDictionary(DictionaryGenerator.java:73)
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:321)
    ... 14 more

result code:2

Solution:

Manually modify the Cube definition (JSON).

If nothing is changed, the exact Count Distinct measure uses the default dictionary to store the encoded user_id values. The default dictionary holds at most 5 million entries, and a separate one is generated for every segment, so cross-day UV analysis yields wrong results; and once the number of distinct user_id values in a single day exceeds 5 million, the build fails with the "Too high cardinality" error shown above.

The limit is controlled by the parameter kylin.dictionary.max.cardinality. You could of course raise it to, say, 100 million, but the build may then exhaust memory and bring the Kylin server down.

(See also the article "Apache Kylin中對上億字符串的精確Count_Distinct示例" on exact Count Distinct over hundreds of millions of strings in Apache Kylin.) The Global Dictionary is backed by a bitmap, so its maximum capacity is Integer.MAX_VALUE (a little over 2.1 billion); if the cumulative number of values in the global dictionary exceeds Integer.MAX_VALUE, the build will also fail. Configure everything else according to your actual business needs.

For this requirement we therefore have to use a Global Dictionary manually. As the name implies, it is a single dictionary shared across all segments: the same user_id maps to exactly one ID in the global dictionary.

Add the JSON field:
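A minimal sketch of the change, assuming the usual Kylin cube descriptor layout; the column name is taken from the error above, and the exact column reference (USER_ID vs. CHECK_IN_TABLE.USER_ID) depends on the Kylin version, so adjust it to your model and re-save the cube afterwards:

"dictionaries": [
    {
        "column": "USER_ID",
        "builder": "org.apache.kylin.dict.GlobalDictionaryBuilder"
    }
]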

4. "Dup key found" error

java.lang.IllegalStateException: Dup key found, key=[24], value1=[1456243200000,2016,02,24,0,1456272000000], value2=[1458748800000,2016,03,24,0,1458777600000]
    at org.apache.kylin.dict.lookup.LookupTable.initRow(LookupTable.java:85)
    at org.apache.kylin.dict.lookup.LookupTable.init(LookupTable.java:68)
    at org.apache.kylin.dict.lookup.LookupStringTable.init(LookupStringTable.java:79)
    at org.apache.kylin.dict.lookup.LookupTable.<init>(LookupTable.java:56)
    at org.apache.kylin.dict.lookup.LookupStringTable.<init>(LookupStringTable.java:65)
    at org.apache.kylin.cube.CubeManager.getLookupTable(CubeManager.java:674)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:60)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:41)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:54)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

result code:2

Modify the model structure: change the lookup table from a form with duplicate rows per key to one without duplicates (the duplicate keys can be confirmed with a query like the one below).
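A minimal check for the duplicates, with hypothetical names: time_lookup_table stands for the lookup table and day_id for the join key from the error above; substitute the actual table and column from your own model:

-- find join keys that appear more than once in the lookup table
select day_id, count(*) as cnt
from time_lookup_table
group by day_id
having count(*) > 1;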

5. Building the global dictionary for USER_ID fails with "/dict/WEIBODATA.USER_TABLE/USER_ID should have 0 or 1 append dict but 2"

java.lang.RuntimeException: Failed to create dictionary on WEIBODATA.CHECK_IN_TABLE.USER_ID
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:325)
    at org.apache.kylin.cube.CubeManager.buildDictionary(CubeManager.java:222)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:50)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:41)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:54)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: GlobalDict /dict/WEIBODATA.USER_TABLE/USER_ID should have 0 or 1 append dict but 2
    at org.apache.kylin.dict.GlobalDictionaryBuilder.build(GlobalDictionaryBuilder.java:68)
    at org.apache.kylin.dict.DictionaryGenerator.buildDictionary(DictionaryGenerator.java:81)
    at org.apache.kylin.dict.DictionaryManager.buildDictionary(DictionaryManager.java:323)
    ... 14 more

result code:2

A web page mentions this problem, and the person who solved it there used the following approach:

1. Check the metadata

scan 'kylin_metadata', {STARTROW=>'/dict/WEIBODATA.USER_TABLE/USER_ID', ENDROW=> '/dict/WEIBODATA.USER_TABLE/USER_ID', FILTER=>"KeyOnlyFilter()"} 

This scan inspects the dictionary metadata. The poster said that the scan returned two metadata entries and that the error went away after cleaning them up. When I ran the scan, however, the result was:

hbase(main):011:0> scan 'kylin_metadata', {STARTROW=>'/dict/WEIBODATA.USER_TABLE/USER_ID', ENDROW=> '/dict/WEIBODATA.USER_TABLE/USER_ID', FILTER=>"KeyOnlyFilter()"} 
ROW                                                                  COLUMN+CELL                                                                                                                                                                                              
0 row(s) in 0.0220 seconds

hbase(main):010:0> scan 'kylin_metadata', {STARTROW=>'/dict/WEIBODATA.USER_TABLE/USER_ID', ENDROW=> '/dict/WEIBODATA.USER_TABLE/USER_ID'}
ROW                                                                  COLUMN+CELL                                                                                                                                                                                              
0 row(s) in 0.0190 seconds

The row did not even show up.
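One caveat (my own note, not from the original post): in the HBase shell the ENDROW/STOPROW of a scan is exclusive, so a scan whose STARTROW equals its ENDROW returns nothing even when matching rows exist. A scan with a slightly larger stop row, for example with a trailing character appended, would rule this out:

scan 'kylin_metadata', {STARTROW=>'/dict/WEIBODATA.USER_TABLE/USER_ID', ENDROW=>'/dict/WEIBODATA.USER_TABLE/USER_IDz', FILTER=>"KeyOnlyFilter()"}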
2. Use Kylin's built-in storage/cache cleanup feature:

${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false
${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true

Rebuilding after this had no effect.

The same page also mentioned using USER_ID in a count distinct measure, which matches my setup, so I removed that count measure and rebuilt.

Still no effect, which shows this was not the cause either.

For now this problem remains unresolved.

6. OOM in the reduce phase

2017-05-24 06:31:27,282 ERROR [main] org.apache.kylin.engine.mr.KylinReducer: 
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.apache.kylin.dict.TrieDictionaryBuilder$Node.reset(TrieDictionaryBuilder.java:60)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:125)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValueR(TrieDictionaryBuilder.java:155)
    at org.apache.kylin.dict.TrieDictionaryBuilder.addValue(TrieDictionaryBuilder.java:92)
    at org.apache.kylin.dict.TrieDictionaryForestBuilder.addValue(TrieDictionaryForestBuilder.java:97)
    at org.apache.kylin.dict.TrieDictionaryForestBuilder.addValue(TrieDictionaryForestBuilder.java:78)
    at org.apache.kylin.dict.DictionaryGenerator$StringTrieDictForestBuilder.addValue(DictionaryGenerator.java:212)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsReducer.doReduce(FactDistinctColumnsReducer.java:197)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsReducer.doReduce(FactDistinctColumnsReducer.java:60)
    at org.apache.kylin.engine.mr.KylinReducer.reduce(KylinReducer.java:48)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
2017-05-24 06:31:44,672 ERROR [main] org.apache.kylin.engine.mr.KylinReducer: 
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOf(Arrays.java:3210)
    at java.util.Arrays.copyOf(Arrays.java:3181)
    at java.util.ArrayList.toArray(ArrayList.java:376)
    at java.util.LinkedList.addAll(LinkedList.java:408)
    at java.util.LinkedList.addAll(LinkedList.java:387)
    at java.util.LinkedList.<init>(LinkedList.java:119)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:384)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.buildTrieBytes(TrieDictionaryBuilder.java:424)
    at org.apache.kylin.dict.TrieDictionaryBuilder.build(TrieDictionaryBuilder.java:418)
    at org.apache.kylin.dict.TrieDictionaryForestBuilder.build(TrieDictionaryForestBuilder.java:109)
    at org.apache.kylin.dict.DictionaryGenerator$StringTrieDictForestBuilder.build(DictionaryGenerator.java:218)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsReducer.doCleanup(FactDistinctColumnsReducer.java:231)
    at org.apache.kylin.engine.mr.KylinReducer.cleanup(KylinReducer.java:71)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:179)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
2017-05-24 06:31:44,801 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOf(Arrays.java:3210)
    at java.util.Arrays.copyOf(Arrays.java:3181)
    at java.util.ArrayList.toArray(ArrayList.java:376)
    at java.util.LinkedList.addAll(LinkedList.java:408)
    at java.util.LinkedList.addAll(LinkedList.java:387)
    at java.util.LinkedList.<init>(LinkedList.java:119)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:384)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.checkOverflowParts(TrieDictionaryBuilder.java:400)
    at org.apache.kylin.dict.TrieDictionaryBuilder.buildTrieBytes(TrieDictionaryBuilder.java:424)
    at org.apache.kylin.dict.TrieDictionaryBuilder.build(TrieDictionaryBuilder.java:418)
    at org.apache.kylin.dict.TrieDictionaryForestBuilder.build(TrieDictionaryForestBuilder.java:109)
    at org.apache.kylin.dict.DictionaryGenerator$StringTrieDictForestBuilder.build(DictionaryGenerator.java:218)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsReducer.doCleanup(FactDistinctColumnsReducer.java:231)
    at org.apache.kylin.engine.mr.KylinReducer.cleanup(KylinReducer.java:71)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:179)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

OOM in the reduce phase:

  1. Data skew
    Data skew is one cause: the keys are distributed unevenly, so one reducer processes far more data than expected and the JVM ends up in constant GC.

  2. Too many or too large value objects
    Too many value objects pile up in a single reducer, again driving the JVM into frequent GC.

Remedies:

Set the reduce memory to 3.5 GB.

  1. Increase the number of reducers: set mapred.reduce.tasks=300.
  2. Set mapred.child.java.opts = -Xmx512m in hive-site.xml or in the hive shell;
    or raise only the reducers' maximum heap to 2 GB and switch the garbage collector to the concurrent mark-sweep collector, which noticeably reduces GC pauses at a slight CPU cost:
    set mapred.reduce.child.java.opts=-Xmx2g -XX:+UseConcMarkSweepGC;
  3. Use a map join instead of a common join: set hive.auto.convert.join = true.
  4. Set hive.optimize.skewjoin = true to mitigate data skew (see the combined snippet after this list).
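A minimal sketch of these settings as they would be issued in a Hive session before the failing step; the property names target the older MapReduce-based Hive used here, and the values are illustrative and should be tuned to your cluster:

-- issued in the hive shell (illustrative values)
set mapred.reduce.tasks=300;
set mapred.reduce.child.java.opts=-Xmx2g -XX:+UseConcMarkSweepGC;
set hive.auto.convert.join=true;
set hive.optimize.skewjoin=true;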

7. Decimal problem

2017-06-06 03:19:03,528 ERROR [pool-7-thread-1] org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder: Dogged Cube Build error
java.io.IOException: java.lang.NumberFormatException
    at org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$BuildOnce.abort(DoggedCubeBuilder.java:197)
    at org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$BuildOnce.checkException(DoggedCubeBuilder.java:169)
    at org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$BuildOnce.build(DoggedCubeBuilder.java:116)
    at org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder.build(DoggedCubeBuilder.java:75)
    at org.apache.kylin.cube.inmemcubing.AbstractInMemCubeBuilder$1.run(AbstractInMemCubeBuilder.java:82)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException
    at java.math.BigDecimal.<init>(BigDecimal.java:550)
    at java.math.BigDecimal.<init>(BigDecimal.java:383)
    at java.math.BigDecimal.<init>(BigDecimal.java:806)
    at org.apache.kylin.measure.basic.BigDecimalIngester.valueOf(BigDecimalIngester.java:39)
    at org.apache.kylin.measure.basic.BigDecimalIngester.valueOf(BigDecimalIngester.java:29)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilderInputConverter.buildValueOf(InMemCubeBuilderInputConverter.java:122)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilderInputConverter.buildValue(InMemCubeBuilderInputConverter.java:94)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilderInputConverter.convert(InMemCubeBuilderInputConverter.java:70)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilder$InputConverter$1.next(InMemCubeBuilder.java:552)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilder$InputConverter$1.next(InMemCubeBuilder.java:532)
    at org.apache.kylin.gridtable.GTAggregateScanner.iterator(GTAggregateScanner.java:141)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilder.createBaseCuboid(InMemCubeBuilder.java:346)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilder.build(InMemCubeBuilder.java:172)
    at org.apache.kylin.cube.inmemcubing.InMemCubeBuilder.build(InMemCubeBuilder.java:141)
    at org.apache.kylin.cube.inmemcubing.DoggedCubeBuilder$SplitThread.run(DoggedCubeBuilder.java:287)

This is usually caused by choosing the wrong format/data type when defining the measure; go back to the cube design and redefine the measure to fix it.
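For reference, the underlying failure is simply BigDecimal rejecting a non-numeric string, which is what happens when a column that is not really numeric is fed into a decimal measure. A minimal illustration (my own example, not Kylin code):

import java.math.BigDecimal;

public class DecimalCheck {
    public static void main(String[] args) {
        // a numeric string parses fine
        System.out.println(new BigDecimal("121.64364"));
        // a non-numeric value, e.g. a text column mapped to a decimal measure,
        // throws java.lang.NumberFormatException -- the error seen above
        System.out.println(new BigDecimal("not-a-number"));
    }
}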

8. Kylin crashes unexpectedly

Usually this is because the 4 GB of memory Kylin is given by default is not enough, and it dies amid constant full GC. To fix it, adjust Kylin's memory allocation and garbage collection strategy in ${KYLIN_HOME}/bin/setenv.sh:

export KYLIN_JVM_SETTINGS="-Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$KYLIN_HOME/logs/kylin.gc.$$ -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M"
# export KYLIN_JVM_SETTINGS="-Xms16g -Xmx16g -XX:MaxPermSize=512m -XX:NewSize=3g -XX:MaxNewSize=3g -XX:SurvivorRatio=4 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=70 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError"

The first export above is the initial plan; the second (commented-out) export is the revised plan.
