Errors Encountered with Kylin

0. Common problems

0.1

java.net.ConnectException: Call From MyDis/192.168.182.86 to MyDis:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
Port 10020 is the configured MapReduce JobHistory port, so clearly the history server is not running or has a problem. Start it with:
	${HADOOP_HOME}/sbin/mr-jobhistory-daemon.sh start historyserver
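A quick way to confirm this diagnosis (a sketch, assuming a standard Hadoop layout with HADOOP_HOME set) is to check whether the JobHistoryServer process is up and whether port 10020 is actually listening:

```shell
# Start the MapReduce JobHistory server (standard sbin location)
${HADOOP_HOME}/sbin/mr-jobhistory-daemon.sh start historyserver

# Verify the daemon is running
jps | grep JobHistoryServer

# Verify the JobHistory RPC port (10020 by default) is listening
netstat -tlnp | grep 10020
```

If the port differs, check mapreduce.jobhistory.address in mapred-site.xml.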

0.2

The job ends in ERROR; clicking through to the log shows java.io.IOException: OS command error exit with return code: 64, error message: SLF4J: Class path contains multiple SLF4J bindings.
Possible causes: the first is that the Hive metastore is not running; starting the Hive metastore service fixes it.
The second is an SLF4J jar conflict on the classpath.
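For the first cause, the Hive metastore can be started in the background (a sketch, assuming Hive's bin directory is on the PATH; the log path is an arbitrary choice):

```shell
# Start the Hive metastore service in the background, logging to a file
nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &

# The metastore listens on port 9083 by default; confirm it is up
netstat -tlnp | grep 9083
```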

1. Kylin MR mode

1.1 org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Exception while unregistering

Kylin's log file reports an error.

How to trace it: Kylin UI log -> aggregated logs -> YARN logs (${HADOOP_HOME}/logs/userlogs/<the log files for the corresponding application ID>):

My problem: Hadoop's execution engine had previously been switched from MR to Tez (which also causes problems under Kylin), then switched back from Tez to MR, and MR failed when Kylin used Hive to build the intermediate flat table.

(Fix: get MR running properly on the Hadoop cluster. It is almost always a configuration-file problem.)
ERROR [Thread-65] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Exception while unregistering
java.lang.NullPointerException
        at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.getApplicationWebURLOnJHSWithoutScheme(MRWebAppUtil.java:140)
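One configuration that is commonly involved in this particular NPE (MRWebAppUtil failing to build a JobHistory web URL) is the JobHistory address pair in mapred-site.xml. A hedged example follows; the hostname MyDis is taken from the log above, and whether these exact properties were the culprit here is an assumption:

```xml
<!-- mapred-site.xml: JobHistory server addresses; missing or invalid values
     can leave MRWebAppUtil without a web URL to construct -->
<property>
	<name>mapreduce.jobhistory.address</name>
	<value>MyDis:10020</value>
</property>
<property>
	<name>mapreduce.jobhistory.webapp.address</name>
	<value>MyDis:19888</value>
</property>
```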

1.2

Problems when running under Tez

Note: if it is not this problem, then I had not yet noticed that I had earlier configured yarn-tez in yarn-site.xml (from when MR was switched to Tez).

This one is quite confusing: under yarn-tez, the directory the step executes against is hdfs:/kylin/kylin_metadata/kylin-6faa47ea-3a5b-4020-976f-c9fcf9d93bd2/kylin_sales_cube/fact_distinct_columns, which is missing one directory level (the statistics subdirectory the error below complains about).

This is why Tez was switched back to MR.

error: java.io.IOException: fail to find the statistics file in base dir: hdfs:/kylin/kylin_metadata/kylin-6faa47ea-3a5b-4020-976f-c9fcf9d93bd2/kylin_sales_cube/fact_distinct_columns/statistics

Fix: switch Tez straight back to MR:
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>
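Whether that statistics level is actually present can be checked directly on HDFS (a sketch; the job ID below is the one from this log and will differ for every build):

```shell
# List the fact_distinct_columns output of the failing step; with MR the
# statistics subdirectory should appear, while under yarn-tez it was missing
hdfs dfs -ls hdfs:/kylin/kylin_metadata/kylin-6faa47ea-3a5b-4020-976f-c9fcf9d93bd2/kylin_sales_cube/fact_distinct_columns
```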

2. Kylin Spark mode

In YARN's configuration file (yarn-site.xml):
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>spark_shuffle,mapreduce_shuffle</value>
</property>
<property>
	<name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
	<value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
At the same time, copy spark-2.4.4-yarn-shuffle.jar into ${HADOOP_HOME}/share/hadoop/yarn/lib.
Location of spark-2.4.4-yarn-shuffle.jar: ${SPARK_HOME}/yarn/
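The jar placement described above can be scripted (a sketch, assuming SPARK_HOME and HADOOP_HOME are set; NodeManagers must be restarted for the spark_shuffle aux-service to be loaded):

```shell
# Copy the Spark YARN shuffle service jar onto the NodeManager classpath
cp ${SPARK_HOME}/yarn/spark-2.4.4-yarn-shuffle.jar \
   ${HADOOP_HOME}/share/hadoop/yarn/lib/

# Restart YARN so NodeManagers pick up the spark_shuffle aux-service
${HADOOP_HOME}/sbin/stop-yarn.sh
${HADOOP_HOME}/sbin/start-yarn.sh
```

In a multi-node cluster the jar must be copied to every NodeManager host, not just one.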
org.apache.spark.SparkException: Exception while starting container container_1575360418223_0002_02_000005 on host MyDis
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:125)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:65)
	at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:534)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


<-------------------------------- note here -------------------------------------->
Caused by: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:spark_shuffle does not exist
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
	at org.apache.hadoop.yarn.client.api.impl.NMClientImpl.startContainer(NMClientImpl.java:205)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:122)
	... 5 more

Reference: https://blog.csdn.net/qq_43008162/article/details/103355122
