A Scala Development Demo Based on IntelliJ IDEA — Spark-SQL Development Demo

When you actually do data analysis and data processing, spark-sql is used quite heavily. This article presents a Scala-based spark-sql development demo. It assumes that the Scala development environment in IntelliJ IDEA, including SBT, has already been installed and configured; if you have not set this up yet, please refer to my earlier articles on building a Scala/Spark development environment in IntelliJ IDEA.

This demo uses spark-sql to work with a Hive database, and aims to help you get a local spark-sql development environment up and running.

The main steps are as follows:

  1. Import the required dependencies
  2. Write the spark-sql code
  3. Package the project and run it remotely

Import the required dependencies

The dependencies are shown in the figure below:
(screenshot: SBT dependency configuration)
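Since the screenshot does not carry over well in text form, here is a minimal build.sbt sketch of what the dependency section might look like. The exact Spark version (2.4.0 below) and Scala version (2.11.12) are assumptions inferred from the spark2-submit command and the lzscalasparktest_2.11-0.1.jar artifact name used later; adjust them to match your cluster.

name := "LzScalaSparkTest"

version := "0.1"

scalaVersion := "2.11.12"

// Spark is supplied by the cluster at runtime, so its artifacts are marked "provided"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "2.4.0" % "provided",
  "org.apache.spark" %% "spark-hive" % "2.4.0" % "provided"
)

Marking the Spark artifacts as "provided" keeps them out of the packaged jar, since spark2-submit on the cluster supplies them at runtime.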

Write the spark-sql code

The demo uses spark-sql to inspect the Hive data warehouse; the main steps are as follows:

Create the "LzSparkSqlTest.scala" class:
import org.apache.log4j.Logger
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

class LzSparkSqlTest {

  val LOGGER = Logger.getLogger(this.getClass)

  /*
   Get or create the SparkSession instance
   */
  def getOrCreateSparkSession(): SparkSession ={

    val conf = new SparkConf().setAppName("TestSparkSqlOnHive")
      .setMaster("local")

    LOGGER.info("-------- About to create the SparkSession object --------")
    val sparkSession = SparkSession.builder().enableHiveSupport().config(conf).getOrCreate()
    LOGGER.info("-------- SparkSession object created --------")

    sparkSession

  }

  /*
   Show which databases exist in the Hive warehouse
   */
  def showHiveDatabases(): Unit ={

    LOGGER.info("-------- Printing the list of databases --------")
    getOrCreateSparkSession().sql("show databases").collect().foreach(println)
    LOGGER.info("-------- Finished printing the list of databases --------")

  }

  /*
   Print all tables in a given database
   */
  def displayTablesOnDB(database: String): Unit ={

    val useDatabaseSql = "use ".concat(database)
    val sparkSession = getOrCreateSparkSession()

    LOGGER.info("-------- Switching to the target database --------")
    sparkSession.sql(useDatabaseSql)
    LOGGER.info("-------- Database switched --------")

    LOGGER.info("-------- Printing all tables in the current database --------")
    sparkSession.sql("show tables").collect().foreach(println)
    LOGGER.info("-------- Finished printing all tables in the current database --------")

  }

}
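One practical note: enableHiveSupport() only finds your Hive warehouse if Spark can see the Hive configuration (typically a hive-site.xml on the classpath or under Spark's conf directory), which is the case when submitting on the cluster. If you instead want to run this directly from the IDE against a remote metastore, a minimal sketch is to point the builder at the metastore explicitly; the thrift URI below is a placeholder assumption, not a value from this project:

    // Sketch only: replace the metastore URI with your real metastore service address
    val sparkSession = SparkSession.builder()
      .appName("TestSparkSqlOnHive")
      .master("local")
      .config("hive.metastore.uris", "thrift://your-metastore-host:9083")
      .enableHiveSupport()
      .getOrCreate()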

Create the "LzScalaSparkTest.scala" object, which contains the main method:
import org.apache.log4j.Logger

object LzScalaSparkTest {

  // Shared logger for this class
  private final val LOGGER = Logger.getLogger(this.getClass)

  /*
   * Entry point: prints a greeting, then inspects the current Hive warehouse
   */
  def main(args: Array[String]): Unit = {

    println("Hello, world!") // print Hello World

    LOGGER.info("------- Starting to inspect the current Hive warehouse ----------")
    val lzSparkSqlTest = new LzSparkSqlTest()

    lzSparkSqlTest.showHiveDatabases()
    lzSparkSqlTest.displayTablesOnDB("edw")
    LOGGER.info("------- Finished inspecting the current Hive warehouse ----------")

  }

}

Package the project and run it remotely

Package the project (for details on how to package, see my earlier article "A Scala Development Demo Based on IntelliJ IDEA — SBT Package Management Demo"); the result of this packaging run is shown in the figure below:
(screenshot: packaging result)
After packaging, upload the jar to the remote server. For how to configure the remote upload, see my earlier article "Setting Up a Scala Development Environment Based on IntelliJ IDEA — Remote Upload and Remote Cluster Debugging"; after uploading, it looks like the figure below:
(screenshot: the uploaded jar on the remote server)
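If you prefer to build from the command line rather than from the IDE, and assuming the standard sbt project layout, packaging is a single command (the output path below matches the one visible in the error log further down):

sbt package    # produces target/scala-2.11/lzscalasparktest_2.11-0.1.jar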

Remote run and debug

Here the job is run in local master mode; the command used is shown below:

spark2-submit --class LzScalaSparkTest lzscalasparktest_2.11-0.1.jar

The run results are shown in the figures below:
(screenshots: spark2-submit run output)

Issues encountered along the way

When running in master local mode with the following command:

spark2-submit --class LzScalaSparkTest lzscalasparktest_2.11-0.1.jar

you may see the following exception:

19/11/14 15:33:57 INFO optimizer.AuthorizerExtension: Creating ranger policy cache directory at /opt/IdeaProjects/LzScalaSparkTest/target/scala-2.11/./policycache
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.internal.SharedState.externalCatalog()Lorg/apache/spark/sql/catalyst/catalog/ExternalCatalog;
	at org.apache.spark.sql.catalyst.optimizer.Authorizable$class.apply(Authorizable.scala:53)
	at org.apache.spark.sql.catalyst.optimizer.AuthorizerExtension.apply(AuthorizerExtension.scala:29)
	at org.apache.spark.sql.catalyst.optimizer.AuthorizerExtension.apply(AuthorizerExtension.scala:29)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
	at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
	at scala.collection.immutable.List.foldLeft(List.scala:84)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
	at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:66)
	at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:66)
	at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
	at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3359)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
	at LzSparkSqlTest.showHiveDatabases(LzSparkSqlTest.scala:31)
	at LzScalaSparkTest$.main(LzScalaSparkTest.scala:15)
	at LzScalaSparkTest.main(LzScalaSparkTest.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/11/14 15:33:57 INFO spark.SparkContext: Invoking stop() from shutdown hook

Solution: disable CDH's spark-sql (Ranger) authorization check; specifically, the following configuration needs to be removed:

spark.sql.extensions=org.apache.ranger.authorization.spark.authorizer.RangerSparkSQLExtension

Running in master yarn mode:

spark2-submit --class LzScalaSparkTest --master yarn lzscalasparktest_2.11-0.1.jar

you may see the following exception:

Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" org.apache.spark.SparkException: When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
	at org.apache.spark.deploy.SparkSubmitArguments.error(SparkSubmitArguments.scala:657)
	at org.apache.spark.deploy.SparkSubmitArguments.validateSubmitArguments(SparkSubmitArguments.scala:290)
	at org.apache.spark.deploy.SparkSubmitArguments.validateArguments(SparkSubmitArguments.scala:251)
	at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:120)
	at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$1.<init>(SparkSubmit.scala:911)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:911)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:81)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Solution:

Set one of the two environment variables, HADOOP_CONF_DIR or YARN_CONF_DIR, so that Spark can locate the Hadoop/YARN configuration.
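For example (the path below is only an assumption; it is a typical Hadoop client configuration directory on a CDH node, so point it at wherever your configuration actually lives):

export HADOOP_CONF_DIR=/etc/hadoop/conf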
