A first look at Spark SQL on Hive

    A while ago the Shark project stopped being updated, and SQL-on-Spark split into two directions: Spark SQL on Hive and Hive on Spark. Hive on Spark will probably take quite a while to reach a usable state, so I decided to try out Spark SQL on Hive as a gradual replacement for our current MR-on-Hive workloads.

    The version under test is Spark 1.0.0. To get Hive support, Spark must be rebuilt, and the build command has changed:

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -Pyarn -Phive -Dhadoop.version=2.3.0-cdh5.0.0 -DskipTests clean package

    I wrote a fairly simple piece of code:

    val conf = new SparkConf().setAppName("SqlOnHive")
    val sc = new SparkContext(conf)
    val hiveContext = new HiveContext(sc)
    import hiveContext._
    hql("FROM tmp.test SELECT id limit 1").foreach(println)

    After compiling, I exported the code as a jar file. The cluster runs in standalone mode, so the job was submitted with java -cp. Before submitting, copy hive-site.xml into the $SPARK_HOME/conf directory:

    java -XX:PermSize=256M -cp /home/hadoop/hql.jar com.yintai.spark.sql.SqlOnHive spark://h031:7077

    The submission then fails with an exception:

java.lang.RuntimeException: Error in configuring object
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
	at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:155)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:187)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:181)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:93)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.Task.run(Task.scala:51)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
	... 27 more
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
	at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
	at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
	at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
	... 32 more
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
	at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
	... 34 more

    The fix is to set the relevant environment variables in spark-env.sh:

    SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/path/to/your/hadoop-lzo/libs/native
    SPARK_CLASSPATH=$SPARK_CLASSPATH:/path/to/your/hadoop-lzo/java/libs
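
    To confirm the change actually took effect, a quick driver-side sanity check is to resolve the codec class that the stack trace complains about, for example from spark-shell (a throwaway check, not part of the job itself):

    // Throws ClassNotFoundException as long as the hadoop-lzo jar is still
    // missing from the classpath; returns the Class object once it is visible.
    Class.forName("com.hadoop.compression.lzo.LzoCodec")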

    After updating the environment variables and resubmitting, another error appears:

14/07/23 10:25:19 ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named tmp)
        at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:431)
        at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:441)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
        at com.sun.proxy.$Proxy9.getDatabase(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
        at com.sun.proxy.$Proxy10.get_database(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:810)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
        at com.sun.proxy.$Proxy11.getDatabase(Unknown Source)
        at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1139)
        at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1128)
        at org.apache.hadoop.hive.ql.exec.DDLTask.switchDatabase(DDLTask.java:3479)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:237)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
        at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:185)
        at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:160)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:249)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:246)
        at org.apache.spark.sql.hive.HiveContext.hiveql(HiveContext.scala:85)
        at org.apache.spark.sql.hive.HiveContext.hql(HiveContext.scala:90)
        at com.yintai.spark.sql.SqlOnHive$.main(SqlOnHive.scala:20)
        at com.yintai.spark.sql.SqlOnHive.main(SqlOnHive.scala)

14/07/23 10:25:19 ERROR DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: tmp
        at org.apache.hadoop.hive.ql.exec.DDLTask.switchDatabase(DDLTask.java:3480)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:237)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
        at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:185)
        at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:160)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:249)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:246)
        at org.apache.spark.sql.hive.HiveContext.hiveql(HiveContext.scala:85)
        at org.apache.spark.sql.hive.HiveContext.hql(HiveContext.scala:90)
        at com.yintai.spark.sql.SqlOnHive$.main(SqlOnHive.scala:20)
        at com.yintai.spark.sql.SqlOnHive.main(SqlOnHive.scala)

    The cause of this error is that the Spark program cannot load hive-site.xml, so it never learns the address of the remote metastore service and falls back to a local Derby database, where the metadata for the database and table naturally cannot be found. Spark SQL loads hive-site.xml by instantiating the HiveConf class, the same way the Hive CLI does; the relevant code is:

    ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
    if (classLoader == null) {
      classLoader = HiveConf.class.getClassLoader();
    }
    hiveDefaultURL = classLoader.getResource("hive-default.xml");
    // Look for hive-site.xml on the CLASSPATH and log its location if found.
    hiveSiteURL = classLoader.getResource("hive-site.xml");
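
    A simple way to confirm this is to perform the same lookup yourself on the driver, for example in spark-shell; if it returns null, HiveConf will not see hive-site.xml either (a small diagnostic mirroring the HiveConf code above):

    // null means hive-site.xml is not on the driver's classpath, so the
    // HiveContext silently falls back to a local Derby metastore.
    Thread.currentThread().getContextClassLoader.getResource("hive-site.xml")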

    Submitting with java -cp does not set up the classpath and environment correctly. Version 1.0.0 added a new way to submit applications via the spark-submit script, so I switched to submitting with that instead:

/usr/lib/spark/bin/spark-submit --class com.yintai.spark.sql.SqlOnHive \
    --master spark://h031:7077 \
    --executor-memory 1g \
    --total-executor-cores 1 \
    /home/hadoop/hql.jar

    During submission this script sets the spark.executor.extraClassPath and spark.driver.extraClassPath properties on the SparkConf, which ensures the required configuration files can be loaded correctly. With that, the test run succeeded.
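
    If you want to verify this from inside the application, the injected properties can be read back off the SparkContext (a small check, assuming the job was launched with spark-submit as shown above):

    // Both values should now contain the entries spark-submit derived from
    // SPARK_CLASSPATH; "<not set>" indicates the job bypassed spark-submit.
    println(sc.getConf.get("spark.executor.extraClassPath", "<not set>"))
    println(sc.getConf.get("spark.driver.extraClassPath", "<not set>"))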

    Spark SQL on Hive is currently compatible with most of Hive's syntax and UDFs, and uses the Catalyst framework for SQL parsing. Jobs run considerably faster than under Hive, but the current version still has some bugs and stability issues, so further testing will have to wait for the next stable release.


    References

    http://spark.apache.org/docs/1.0.0/sql-programming-guide.html

    http://hsiamin.com/posts/2014/05/03/enable-lzo-compression-on-hadoop-pig-and-spark/
