A roundup of issues with Spark reading Hive

Hive's execution engine is Tez; how should Spark be configured?

Spark threw an error when reading Hive data. Following advice found online, I copied hive-site.xml from Hive's conf directory into Spark's conf directory and added the Hive metastore address:

<property>
        <name>hive.metastore.uris</name>
        <value>thrift://hadoop102:9083</value>
</property>

Then start the Hive metastore service:

bin/hive --service metastore
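For reference, the read being attempted is nothing special; it is just a plain Hive query issued from spark-shell (or bin/spark-sql). A minimal sketch, with a hypothetical table name:

// Inside spark-shell the prebuilt `spark` session picks up
// hive-site.xml from Spark's conf directory automatically.
spark.sql("show databases").show()
// default.student is a hypothetical table name; substitute your own.
spark.sql("select * from default.student limit 10").show()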


After that, starting Spark to read Hive still failed:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/tez/dag/api/SessionNotRunning
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:529)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:114)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.tez.dag.api.SessionNotRunning
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 12 more


This is Spark conflicting with the Tez engine: with hive.execution.engine=tez, Hive's SessionState tries to start a Tez session at startup, but Spark does not ship the Tez jars, hence the NoClassDefFoundError above. Setting the execution engine in the hive-site.xml under Spark's conf directory back to plain MapReduce fixes it:

<property>
       <name>hive.execution.engine</name>
       <value>mr</value>
</property>
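Spark SQL runs queries with its own engine and only talks to Hive's metastore, so the engine setting matters here only because Hive's SessionState parses it at startup. As an alternative to editing hive-site.xml, the override can be passed per session via spark.hadoop.* properties, which Spark forwards into the Hadoop configuration its Hive client sees; a minimal sketch, with the caveat that whether it takes effect early enough depends on the Spark version:

import org.apache.spark.sql.SparkSession

// Assumption: spark.hadoop.* settings reach the Hive client's
// Configuration before SessionState.start runs; if your Spark
// version does not honor this, edit hive-site.xml as shown above.
val spark = SparkSession.builder()
  .appName("HiveOverMr")
  .config("spark.hadoop.hive.execution.engine", "mr")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("show databases").show()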


The Hive table is LZO-compressed, and querying it fails. How do I fix this?

Caused by: java.lang.reflect.InvocationTargetException: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
  ... 92 more

This error says the LZO codec class could not be found on the classpath. Some posts online suggest adding the LZO jar directory in conf/spark-env.sh:


export SPARK_CLASSPATH=$SPARK_CLASSPATH:/opt/hadoop/share/hadoop/common/*

I added this, but startup then failed with a warning that Spark 2.x has deprecated SPARK_CLASSPATH and recommends --driver-class-path instead.

So I launched with --driver-class-path:

bin/spark-shell --master yarn --deploy-mode client --driver-class-path /opt/module/hadoop-2.7.2/share/hadoop/common/*

It still failed after startup:

java.lang.NoSuchMethodError: org.apache.hadoop.io.retry.RetryPolicies.retryOtherThanRemoteException(Lorg/apache/hadoop/io/retry/RetryPolicy;Ljava/util/Map;)Lorg/apache/hadoop/io/retry/RetryPolicy;

The wildcard had put every jar under hadoop/common on the driver classpath, and those Hadoop classes clashed with the Hadoop version bundled with Spark, hence the NoSuchMethodError. Pointing at just the LZO jar avoids the conflict:

bin/spark-shell --master yarn --deploy-mode client --driver-class-path /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar

This time it started normally.
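A quick way to confirm the codec is visible is to resolve the class from the spark-shell REPL before querying. Keep in mind that --driver-class-path only covers the driver; here the executors evidently already get the LZO jar from the cluster's Hadoop classpath, and if they did not, spark.executor.extraClassPath (or --jars) would be needed as well. A small check, with a hypothetical table name:

// Throws ClassNotFoundException if the LZO jar is still missing
// from the driver classpath.
Class.forName("com.hadoop.compression.lzo.LzoCodec")

// default.lzo_table is a hypothetical LZO-backed Hive table;
// reading it exercises the codec end to end.
spark.sql("select * from default.lzo_table limit 10").show()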


Spark fails when reading a Hive table backed by HBase

error in initSerDe: java.lang.ClassNotFoundException Class org.apache.hadoop.hive.hbase.HBaseSerDe not found java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.hbase.HBaseSerDe not found

Copy the following jars from HBase's lib directory into Spark's jars directory:

  • hbase-protocol-1.1.2.jar
  • hbase-client-1.1.2.jar
  • hbase-common-1.1.2.jar
  • hbase-server-1.1.2.jar
  • hive-hbase-handler-1.2.1.jar
  • metrics-core-2.2.0.jar

Then copy hbase-site.xml into Spark's conf directory and restart spark-shell.
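After restarting, two quick checks from the REPL confirm the fix: the SerDe class should now resolve, and the mapped table should read. A minimal sketch, with a hypothetical table name:

// Resolves only if hive-hbase-handler is now on Spark's classpath.
Class.forName("org.apache.hadoop.hive.hbase.HBaseSerDe")

// default.hbase_student is a hypothetical Hive table mapped to HBase.
spark.sql("select * from default.hbase_student limit 10").show()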
