Hadoop 3.x Big Data Cluster Setup Series 10: Configuring Spark Shell and Hive on Spark

1. Spark Shell Configuration

The Spark shell can query Hive tables out of the box:

spark-shell
spark.sql("select count(*) from test.t2").show()

2. Hive on Spark Configuration

2.1 Problem Description

The same query succeeds under MapReduce but fails as soon as the execution engine is switched to Spark:

set hive.execution.engine=mr;
select count(*) from test.t2;     -- runs fine on the MapReduce engine
set hive.execution.engine=spark;
select count(*) from test.t2;     -- fails on the Spark engine

The Spark run errors out with:

FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
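
The message itself carries no root cause; the actual stack trace lands in the Hive client log and in the YARN logs of the failed application. A minimal way to pull both (<application_id> comes from the console output or the ResourceManager UI; /tmp/$USER/hive.log is Hive's default client log location unless hive.log.dir is overridden):

tail -n 100 /tmp/$USER/hive.log
yarn logs -applicationId <application_id>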

2.2 Solution

Create a Spark configuration file in Hive's conf directory:

cd /home/apache-hive-3.1.3-bin/conf
vim spark-defaults.conf

Add the following (Hive submits its Spark jobs with these parameters):

spark.master               yarn
spark.eventLog.enabled     true
spark.eventLog.dir         hdfs://hp5:8020/spark-history
spark.executor.memory      1g
spark.driver.memory        1g
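
The event log directory referenced above must already exist in HDFS, or job submission will fail; creating it is a one-liner (path exactly as configured above):

hadoop fs -mkdir -p /spark-history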
vim /home/apache-hive-3.1.3-bin/conf/hive-site.xml

<!-- Location of the Spark jars in HDFS (note: the port, 8020 here, must match the NameNode RPC port) -->
<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://hp5:8020/spark-jars/*</value>
</property>
  
<!-- Hive execution engine -->
<property>
    <name>hive.execution.engine</name>
    <value>spark</value>
</property>
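
For spark.yarn.jars to resolve, the Spark jars must actually exist at that HDFS path. If that was not done in an earlier part of this series, uploading them is straightforward (same Spark install as used below):

hadoop fs -mkdir -p /spark-jars
hadoop fs -put /home/spark-3.2.2-bin-hadoop3.2/jars/* /spark-jars/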

Copy the Spark jars to Hive's lib directory:

cd /home/spark-3.2.2-bin-hadoop3.2/jars
cp ./scala-library-2.12.15.jar /home/apache-hive-3.1.3-bin/lib/
cp ./spark-core_2.12-3.2.2.jar /home/apache-hive-3.1.3-bin/lib/
cp ./spark-network-common_2.12-3.2.2.jar /home/apache-hive-3.1.3-bin/lib/
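
A quick check that the three jars landed (the grep pattern is just mine), and remember to restart the metastore and HiveServer2 so they pick up the new classpath and the hive-site.xml changes before re-testing:

ls /home/apache-hive-3.1.3-bin/lib | grep -E 'spark-|scala-library'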

However, recent Spark and Hive releases are not binary compatible out of the box: Hive has to be compiled against the target Spark version. See:
https://blog.csdn.net/rfdjds/article/details/125389450

That post reports exactly the error I am seeing here:

java.lang.NoSuchMethodError: org.apache.spark.api.java.JavaSparkContext.accumulator(Ljava/lang/Object;Ljava/lang/String;Lorg/apache/spark/AccumulatorParam;)Lorg/apache/spark/Accumulator;

This is the incompatibility in action: the old AccumulatorParam-based accumulator API was removed in Spark 3.0, but Hive 3.1.3's Spark client still calls it, so the method lookup fails at runtime.
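
The fix in that post is to rebuild Hive against the installed Spark version. A rough sketch of the approach, not a complete recipe (the required pom edits and source patches are in the linked article):

# In the apache-hive-3.1.3 source tree, set <spark.version> to 3.2.2
# in pom.xml, fix the resulting compile errors, then build a binary
# distribution with the dist profile:
mvn clean package -Pdist -DskipTests -Dmaven.javadoc.skip=true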