Setting up a Spark 3.x remote development environment on Linux with IDEA

Dependencies

JDK 8 or 11 both work; do not go up to JDK 17.

  The first problem with JDK 17 is that reflection into JDK internal classes is disallowed by default, so a lot of configuration would have to change.

Scala 2.13

  Scala 2.13 was prepared with Scala 3.0 in mind and brings quite a few improvements. Programming in Scala (4th edition) is a good way to learn it.

Hadoop 3.2.1

  Because the Windows hadoop winutils builds currently only go up to 3.2.1, it is best to use the same version, or at least one that is not too far off.

  The Hadoop bundled with Spark by default has no native libraries and performs relatively poorly, so a standalone Hadoop distribution should be installed as well.

Hive 3.1.3

  Spark tables can exist on their own, but warnings will be logged, so Hive also has to be installed and Spark run on Hive.

Spark 3.3.4

  Just use the latest version.

PostgreSQL or MySQL

  Stores the Hive metastore metadata; PostgreSQL is used as the example here.

ZooKeeper

  Install 3.5.9, since both HBase and HDFS depend on ZooKeeper for leader election.

Once all of the above is installed, a complete big-data foundation is up: Spark SQL, Spark streaming jobs, Spark graph and Spark machine learning jobs can all run on it, and a single stack covers all computation and ETL with fast scale-out and fast scheduling. A typical big-data platform also includes HBase and Flink, but those two are relatively independent and can be left out.

Single-node standalone setup on Linux

  Although Spark can schedule on its own, if HDFS is used for storage it is better to set up a single-node YARN rather than standalone; that is closer to how production runs.

  Configure the three files under $SPARK_HOME/conf.

  spark-env.sh

export JAVA_HOME="/usr/local/jdk-11.0.21+9"
export SPARK_MASTER_HOST=192.168.231.128
export SPARK_LOCAL_IP=192.168.231.128
export SPARK_MASTER_PORT=7077
# history server configuration
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.retainedApplications=30 -Dspark.history.fs.logDirectory=/root/spark-3.3.4-bin-hadoop3-scala2.13/history"

  workers

192.168.231.128

  spark-defaults.conf 

spark.master                     spark://192.168.231.128:7077
spark.eventLog.enabled           true
spark.eventLog.dir               /root/spark-3.3.4-bin-hadoop3-scala2.13/logs
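
  With these three files in place, the standalone master, the worker(s) listed in conf/workers, and the history server can be started with the scripts that ship with Spark. A minimal sketch, assuming the directory layout above (create the directories referenced in the config before the first start):

mkdir -p /root/spark-3.3.4-bin-hadoop3-scala2.13/logs /root/spark-3.3.4-bin-hadoop3-scala2.13/history
$SPARK_HOME/sbin/start-all.sh              # master + the workers listed in conf/workers
$SPARK_HOME/sbin/start-history-server.sh   # history UI on port 18080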

 

Configuring Scala/Spark support in IDEA, with jobs submitted both via spark-submit and directly from IDEA

  1. Install the Scala plugin

  2. Add Scala under Global Libraries

  3. Add Scala to the module settings

  4. Configure the Scala plugin in Maven

  Parent pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.study.sparkmaven</groupId>
    <artifactId>sparkmaven</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <modules>
        <module>hello</module>
        <module>wordcount</module>
    </modules>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <!-- Scala version -->
        <version.scala>2.13.9</version.scala>
        <version.maven.compile.plugin>3.9.0</version.maven.compile.plugin>
        <version.maven.scala.plugin>2.15.2</version.maven.scala.plugin>
        <version.maven.jar.plugin>3.2.2</version.maven.jar.plugin>
        <version.maven.dependency.plugin>3.2.0</version.maven.dependency.plugin>
        <version.maven.assembly.plug>3.3.0</version.maven.assembly.plug>
        <spark.version>3.2.0</spark.version>
    </properties>

    <build>
        <pluginManagement>
            <plugins>
                <!-- maven-compiler-plugin -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>${version.maven.compile.plugin}</version>
                    <!-- configuration -->
                    <configuration>
                        <!-- source level -->
                        <source>${maven.compiler.source}</source>
                        <target>${maven.compiler.target}</target>
                        <!-- encoding -->
                        <encoding>UTF-8</encoding>
                        <!-- emit debug info -->
                        <debug>true</debug>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>org.scala-tools</groupId>
                    <artifactId>maven-scala-plugin</artifactId>
                    <version>${version.maven.scala.plugin}</version>
                    <configuration>
                        <!-- Scala version -->
                        <scalaVersion>${version.scala}</scalaVersion>
                    </configuration>
                    <!-- bind the Scala goals to the build lifecycle -->
                    <executions>
                        <!-- execution -->
                        <execution>
                            <!-- with multiple executions, each id must be set and unique -->
                            <id>scala-compile</id>
                            <!-- lifecycle phase to bind to -->
                            <phase>compile</phase>
                            <!-- goals to run in that phase -->
                            <goals>
                                <goal>compile</goal>
                            </goals>
                        </execution>
                        <execution>
                            <id>scala-test-compile</id>
                            <phase>test-compile</phase>
                            <goals>
                                <goal>testCompile</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
                <!-- maven-assembly-plugin -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-assembly-plugin</artifactId>
                    <version>${version.maven.assembly.plug}</version>
                    <!-- configuration -->
                    <configuration>
                        <!-- assembly descriptor to use -->
                        <descriptorRefs>
                            <!-- build a jar that bundles all dependencies -->
                            <descriptorRef>jar-with-dependencies</descriptorRef>
                        </descriptorRefs>
                    </configuration>
                    <executions>
                        <!-- execution -->
                        <execution>
                            <id>make-assembly</id>
                            <!-- bind to the package phase -->
                            <phase>package</phase>
                            <goals>
                                <!-- the only goal needed here -->
                                <goal>single</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>

    <dependencyManagement>
        <dependencies>
            <!-- Scala library -->
            <dependency>
                <groupId>org.scala-lang</groupId>
                <artifactId>scala-library</artifactId>
                <version>${version.scala}</version>
            </dependency>
            <!-- Scala compiler -->
            <dependency>
                <groupId>org.scala-lang</groupId>
                <artifactId>scala-compiler</artifactId>
                <version>${version.scala}</version>
            </dependency>
            <!-- Scala reflection -->
            <dependency>
                <groupId>org.scala-lang</groupId>
                <artifactId>scala-reflect</artifactId>
                <version>${version.scala}</version>
            </dependency>
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.13</artifactId>
                <version>${spark.version}</version>
                <!-- comment out to run from IDEA; enable when packaging -->
                <!-- <scope>provided</scope>-->
            </dependency>
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-sql_2.13</artifactId>
                <version>${spark.version}</version>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>

  Child pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>sparkmaven</artifactId>
        <groupId>com.study.sparkmaven</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>wordcount</artifactId>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.13</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.13</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <configuration>
                    <launchers>
                        <launcher>
                            <id>wordcount</id>
                            <mainClass>com.sparklearn.WordCount</mainClass>
                        </launcher>
                    </launchers>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <classpathPrefix>lib/</classpathPrefix>
                            <mainClass>com.sparklearn.WordCount</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

  At this point development, compilation, and packaging all work.

Running locally (Windows)

  If the built jars are always just copied to Linux to run, or remote development is used, this step is optional.

  Download Hadoop and the Hadoop winutils package for Windows (download the same version), extract the winutils package over %HADOOP_HOME%/bin, and copy the resulting hadoop.dll into C:/Windows/System32. Then the following can be run directly from IDEA:

package com.sparklearn

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {

  def main(args: Array[String]): Unit = {
    // When submitting with spark-submit, do not call setMaster(): it would override the --master
    // command-line option and give incorrect results, so control the master from the command line.
    // Set local[*] when running on Windows; otherwise leave it unset.
    // However, when running the job directly with java -jar xxx.jar, the master must be specified.
    val conf = new SparkConf().setMaster("local[*]").setAppName("wordcount").set("spark.testing.memory", "2147480000")
    val sc = new SparkContext(conf)
    sc.textFile("D:\\hadoop-3.2.1\\data\\a.txt")
      .flatMap(_.split(" "))
      .map(_ -> 1)
      .reduceByKey(_ + _)
      .collect()
      .foreach(println(_))
  }
}

  Unlike spark-submit, local mode does not require starting standalone first; running directly from IDEA is enough for debugging.

  See https://blog.csdn.net/saranjiao/article/details/106082374 for details. Any local master means running locally; only spark://xxx:xx submits to the cluster.

  The jar built from the Scala project above can also be run with spark-submit, e.g. spark-submit --master spark://127.0.0.1:7077 --class com.sparklearn.WordCount wordcount-1.0-SNAPSHOT-jar-with-dependencies.jar, or directly with java -jar wordcount-1.0-SNAPSHOT-jar-with-dependencies.jar.

  When running directly with java -jar xxx.jar, the target environment's CLASSPATH must contain the Spark dependencies (spark-core, spark-sql, the Scala libraries they depend on, and so on), otherwise class-not-found errors are thrown.

  When submitting with spark-submit, the driver is also hosted by the Spark or YARN cluster, so it is enough to mark the Spark dependencies as provided when packaging and the jar stays very small; bundling the dependencies also works (no errors are raised), the jar is just much larger.
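
  One way to avoid commenting and uncommenting the provided scope by hand is to drive the scope from a Maven property and flip it with a profile. This is a sketch, not what the project above does; the profile name cluster is made up here:

    <properties>
        <spark.scope>compile</spark.scope>
    </properties>

    <profiles>
        <!-- mvn -Pcluster package marks the Spark dependencies as provided -->
        <profile>
            <id>cluster</id>
            <properties>
                <spark.scope>provided</spark.scope>
            </properties>
        </profile>
    </profiles>

    <!-- in dependencyManagement -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.13</artifactId>
        <version>${spark.version}</version>
        <scope>${spark.scope}</scope>
    </dependency>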

Remote debugging

  When developing on Windows and running on Linux, the need to debug remotely is unavoidable at some point.
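
  A common approach is to open a JDWP port on the driver and attach IDEA's Remote JVM Debug run configuration to it. A minimal sketch (port 5005 and suspend=y are arbitrary choices, not part of the setup above):

spark-submit --master spark://192.168.231.128:7077 \
  --class com.sparklearn.WordCount \
  --conf "spark.driver.extraJavaOptions=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5005" \
  wordcount-1.0-SNAPSHOT-jar-with-dependencies.jar

  With suspend=y the driver waits until the debugger attaches. Executors can be handled the same way via spark.executor.extraJavaOptions, but a fixed port will clash as soon as more than one executor runs on the same host.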

How Spark runs an application, and the main function

  Scheduling works by executing the application's main function, which is why it is called a job rather than simply loading an external jar and invoking a method (Transwarp TDH starts jobs quickly, most likely because it changed exactly this part so a new JVM is not launched every time). However, vanilla Spark requires the SparkSession to be initialized inside main and to be a singleton, so an isolation mechanism is needed here.

   https://blog.51cto.com/u_16099276/7409034

  https://blog.csdn.net/weixin_42011858/article/details/129516511

  https://blog.csdn.net/luoyepiaoxue2014/article/details/128076590

Terminating a running Spark job

  https://blog.csdn.net/wankunde/article/details/117813432
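
  For reference, a sketch of the usual commands (the IDs below are placeholders): on YARN the application is killed with yarn application -kill, and a driver submitted to standalone in cluster deploy mode can be killed through the standalone client:

yarn application -kill application_1704256542621_0001
spark-class org.apache.spark.deploy.Client kill spark://192.168.231.128:7077 driver-20240103123541-0000

  A job whose driver runs in client mode (the default for spark-submit here) stops when the driver process itself is terminated.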

Spark Streaming (stream processing)

package com.sparklearn

import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.storage.StorageLevel

object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }

    // Create the context with a 1 second batch size
    val sparkConf = new SparkConf().setAppName("NetworkWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(1))
    ssc.sparkContext.setLogLevel("WARN")

    // Create a socket stream on target ip:port and count the
    // words in input stream of \n delimited text (eg. generated by 'nc')
    // Note that no duplication in storage level only for running locally.
    // Replication necessary in distributed scenario for fault tolerance.
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

  Likewise, package it and submit it to the Linux server to run.

  Start nc first.

[lightdb@lightdb-dev ~]$ nc -lk 9999

  Then submit the job.

[root@lightdb-dev ~]# spark-submit --master spark://127.0.0.1:7077 --class com.sparklearn.NetworkWordCount wordcount-1.0-SNAPSHOT-jar-with-dependencies.jar localhost 9999
24/01/03 12:35:41 INFO SparkContext: Running Spark version 3.3.4
24/01/03 12:35:41 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/01/03 12:35:41 INFO ResourceUtils: ==============================================================
24/01/03 12:35:41 INFO ResourceUtils: No custom resources configured for spark.driver.
24/01/03 12:35:41 INFO ResourceUtils: ==============================================================
24/01/03 12:35:41 INFO SparkContext: Submitted application: NetworkWordCount
24/01/03 12:35:41 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/01/03 12:35:41 INFO ResourceProfile: Limiting resource is cpu
....
24/01/03 12:35:43 INFO SingleEventLogFileWriter: Logging events to file:/root/spark-3.3.4-bin-hadoop3-scala2.13/logs/local-1704256542621.inprogress
-------------------------------------------
Time: 1704256544000 ms
-------------------------------------------

  Type some arbitrary input into the nc session, for example:

[lightdb@lightdb-dev ~]$ nc -lk 9999
888
666
非 fe
s^C
[lightdb@lightdb-dev ~]$ 

  Spark job output:

-------------------------------------------
Time: 1704256626000 ms
-------------------------------------------
(888,1)

-------------------------------------------
Time: 1704256627000 ms
-------------------------------------------
(666,1)

-------------------------------------------
Time: 1704256628000 ms
-------------------------------------------

-------------------------------------------
Time: 1704256629000 ms
-------------------------------------------

-------------------------------------------
Time: 1704256630000 ms
-------------------------------------------

-------------------------------------------
Time: 1704256631000 ms
-------------------------------------------
(非,1)
(fe,1)

  Gracefully stopping a Spark Streaming job
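
  A sketch of the relevant API (not wired into the example above): either enable graceful shutdown so a normal kill lets in-flight batches finish, or stop the context explicitly from a control thread.

// set on the SparkConf before the StreamingContext is created
sparkConf.set("spark.streaming.stopGracefullyOnShutdown", "true")

// or stop programmatically; stopGracefully = true waits for queued batches to complete
ssc.stop(stopSparkContext = true, stopGracefully = true)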

Spark SQL

Differences and relationship between RDDs and DataFrames

Spark interaction with external databases

Spark GraphX

Spark machine learning

Typical errors

scalac: error while loading package, class file 'D:\scala-2.13.0\lib\scala-library.jar(scala/reflect/package.class)' is broken
(class java.lang.RuntimeException/error reading Scala signature of package.class: Scala signature package has wrong version
 expected: 5.0
 found: 5.2 in package.class)

The installed Scala is 2.13 and the dependencies were also switched to 2.13 following https://www.80wz.com/qawfw/880.html, yet the error above was still reported.

spark-shell and spark-submit themselves work fine. With Spark standalone already running, submitting from IDEA via SparkContext:

package com.sparklearn

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("spark://192.168.231.128:7077").setAppName("wordcount").set("spark.testing.memory","2147480000")
    val sc = new SparkContext(conf)
    sc.textFile("/root/spark-3.3.4-bin-hadoop3-scala2.13/data/a.txt")
      .flatMap(_.split(" "))
      .map(_->1)
      .reduceByKey(_+_)
      .collect()
      .foreach(println(_))
  }
}

The error is as follows:

PS D:\sparkmaven\wordcount\target> java -jar .\wordcount-1.0-SNAPSHOT-jar-with-dependencies.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
24/01/02 16:01:05 INFO SparkContext: Running Spark version 3.2.0
24/01/02 16:01:05 WARN Shell: Did not find winutils.exe: {}
java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
        at org.apache.hadoop.util.Shell.fileNotFoundException(Shell.java:548)
        at org.apache.hadoop.util.Shell.getHadoopHomeDir(Shell.java:569)
        at org.apache.hadoop.util.Shell.getQualifiedBin(Shell.java:592)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:689)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
        at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1691)
        at org.apache.hadoop.security.SecurityUtil.setConfigurationInternal(SecurityUtil.java:104)
        at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:88)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:312)
        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:300)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:575)
        at org.apache.spark.util.Utils$.$anonfun$getCurrentUserName$1(Utils.scala:2510)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2510)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:314)
        at com.sparklearn.WordCount$.main(WordCount.scala:9)
        at com.sparklearn.WordCount.main(WordCount.scala)
Caused by: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
        at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:468)
        at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:439)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:516)
        ... 13 more
24/01/02 16:01:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/01/02 16:01:05 INFO ResourceUtils: ==============================================================
24/01/02 16:01:05 INFO ResourceUtils: No custom resources configured for spark.driver.
24/01/02 16:01:05 INFO ResourceUtils: ==============================================================
24/01/02 16:01:05 INFO SparkContext: Submitted application: wordcount
24/01/02 16:01:05 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, scri
pt: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/01/02 16:01:05 INFO ResourceProfile: Limiting resource is cpu
24/01/02 16:01:05 INFO ResourceProfileManager: Added ResourceProfile id: 0
24/01/02 16:01:05 INFO SecurityManager: Changing view acls to: zjhua
24/01/02 16:01:05 INFO SecurityManager: Changing modify acls to: zjhua
24/01/02 16:01:05 INFO SecurityManager: Changing view acls groups to:
24/01/02 16:01:05 INFO SecurityManager: Changing modify acls groups to:
24/01/02 16:01:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(zjhua); groups with view permissions: Set(); users  with m
odify permissions: Set(zjhua); groups with modify permissions: Set()
24/01/02 16:01:06 INFO Utils: Successfully started service 'sparkDriver' on port 58717.
24/01/02 16:01:06 INFO SparkEnv: Registering MapOutputTracker
24/01/02 16:01:06 INFO SparkEnv: Registering BlockManagerMaster
24/01/02 16:01:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
24/01/02 16:01:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
Exception in thread "main" java.lang.IllegalAccessError: class org.apache.spark.storage.StorageUtils$ (in unnamed module @0x2928854b) cannot access class sun.nio.ch.DirectBuffer (in module jav
a.base) because module java.base does not export sun.nio.ch to unnamed module @0x2928854b
        at org.apache.spark.storage.StorageUtils$.<init>(StorageUtils.scala:213)
        at org.apache.spark.storage.StorageUtils$.<clinit>(StorageUtils.scala)
        at org.apache.spark.storage.BlockManagerMasterEndpoint.<init>(BlockManagerMasterEndpoint.scala:110)
        at org.apache.spark.SparkEnv$.$anonfun$create$9(SparkEnv.scala:348)
        at org.apache.spark.SparkEnv$.registerOrLookupEndpoint$1(SparkEnv.scala:287)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:336)
        at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:191)
        at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:460)
        at com.sparklearn.WordCount$.main(WordCount.scala:9)
        at com.sparklearn.WordCount.main(WordCount.scala)

  For configuring the Hadoop environment variables, see https://blog.csdn.net/lvoelife/article/details/133349627. This is not required for running on Linux, but running from the local project or submitting from Windows to Linux requires winutils to be configured. An alternative is to set up IDEA remote development directly.

  does not export sun.nio.ch to unnamed module: set the environment variables JAVA_OPT and JAVA_TOOL_OPTIONS, both to --add-exports=java.base/sun.nio.ch=ALL-UNNAMED.
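
  For example, in PowerShell this can be made permanent with setx (a sketch; JAVA_TOOL_OPTIONS is picked up by every JVM started afterwards, including the one IDEA launches):

setx JAVA_TOOL_OPTIONS "--add-exports=java.base/sun.nio.ch=ALL-UNNAMED"

  Alternatively the same flag can be pasted into the VM options field of the IDEA run configuration.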

Submitting with spark-submit directly on the server works, but submitting remotely (single-node standalone mode) fails as follows:

24/01/02 19:30:16 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://192.168.231.128:7077...
24/01/02 19:30:16 INFO TransportClientFactory: Successfully created connection to /192.168.231.128:7077 after 34 ms (0 ms spent in bootstraps)
24/01/02 19:30:35 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
24/01/02 19:30:36 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://192.168.231.128:7077...
24/01/02 19:30:56 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://192.168.231.128:7077...
24/01/02 19:31:16 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
24/01/02 19:31:16 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
24/01/02 19:31:16 INFO SparkUI: Stopped Spark web UI at http://DESKTOP-UPH3K6S.mshome.net:4040
24/01/02 19:31:16 INFO StandaloneSchedulerBackend: Shutting down all executors
24/01/02 19:31:16 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
24/01/02 19:31:16 WARN StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
24/01/02 19:31:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
24/01/02 19:31:16 INFO MemoryStore: MemoryStore cleared
24/01/02 19:31:16 INFO BlockManager: BlockManager stopped
24/01/02 19:31:16 INFO BlockManagerMaster: BlockManagerMaster stopped
24/01/02 19:31:16 WARN MetricsSystem: Stopping a MetricsSystem that is not running
24/01/02 19:31:16 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
24/01/02 19:31:16 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 56604.
24/01/02 19:31:16 INFO NettyBlockTransferService: Server created on DESKTOP-UPH3K6S.mshome.net:56604
24/01/02 19:31:16 INFO SparkContext: Successfully stopped SparkContext
24/01/02 19:31:16 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
24/01/02 19:31:16 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, DESKTOP-UPH3K6S.mshome.net, 56604, None)
24/01/02 19:31:16 ERROR SparkContext: Error initializing SparkContext.
java.lang.NullPointerException: Cannot invoke "org.apache.spark.rpc.RpcEndpointRef.askSync(Object, scala.reflect.ClassTag)" because the return value of "org.apache.spark.storage.BlockManagerMa
ster.driverEndpoint()" is null
        at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:78)
        at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:518)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:596)
        at com.sparklearn.WordCount$.main(WordCount.scala:9)
        at com.sparklearn.WordCount.main(WordCount.scala)
24/01/02 19:31:16 INFO SparkContext: SparkContext already stopped.
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "org.apache.spark.rpc.RpcEndpointRef.askSync(Object, scala.reflect.ClassTag)" because the return value of "org.apache.s
park.storage.BlockManagerMaster.driverEndpoint()" is null
        at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:78)
        at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:518)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:596)
        at com.sparklearn.WordCount$.main(WordCount.scala:9)
        at com.sparklearn.WordCount.main(WordCount.scala)
24/01/02 19:31:16 INFO ShutdownHookManager: Shutdown hook called
24/01/02 19:31:16 INFO ShutdownHookManager: Deleting directory D:\Temp\spark-5f86f6fc-7eeb-4939-b802-0f94466b812e

The cause of this last error has not been found yet; the workaround is to copy the built jar to the server over ssh and submit it there with spark-submit.

Spark 2 and Spark 3 cannot find org.apache.spark.Logging

  See http://www.taodudu.cc/news/show-4644278.html?action=onClick and https://download.csdn.net/download/yewakui2253/10440634; simply switching to org.apache.spark.internal.Logging does not work.

  Instead, log4j can be added to the job, a log4j configuration can be specified on the spark-submit command line, or the global log4j configuration can be changed. For example:

    val ssc = new StreamingContext(sparkConf, Seconds(1))
    ssc.sparkContext.setLogLevel("WARN")

Managing jar dependencies when submitting Spark jobs

  Besides a fat jar, or a cluster environment that already contains the dependencies and has CLASSPATH configured, there are a few other options; see https://blog.51cto.com/u_16087105/6223665.

Handling various warnings

spark-submit error: Failed to get database default, returning NoSuchObjectException

  Running SQL through spark-sql reports Failed to get database default, returning NoSuchObjectException because Hive is involved but this Spark installation has no Hive configured, so Hive needs to be downloaded and Spark pointed at it.

  Copy Hive's hive-site.xml (hive/conf/hive-site.xml) into the Spark conf directory (spark/conf/hive-site.xml).

  Installing Hive, configuring hiveserver2, and switching the Hive metastore from Derby to PostgreSQL
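
  A minimal sketch of the metastore-related part of hive-site.xml for a PostgreSQL backend (host, port, database, user and password are placeholders; the PostgreSQL JDBC driver jar must also be on Hive's classpath):

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:postgresql://192.168.231.128:5432/hive_metastore</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>org.postgresql.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
    </property>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://192.168.231.128:9083</value>
    </property>
</configuration>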

Hive on Spark vs. Spark on Hive

Hive on Spark
Spark is integrated into Hive: Hive still stores the metadata and parses the HQL; only the execution engine is switched from the default MR engine to Spark, which does the actual computation. Deployment is relatively complex.

Spark on Hive
Hive is only responsible for metadata storage; Spark parses and executes the SQL statements, using Spark SQL syntax, and deployment is simple.

The advantage of Spark on Hive is its more flexible programming interface, which suits all kinds of data processing needs, but its performance may fall behind Hive on Spark, especially for complex queries.
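
  In code, Spark on Hive simply means enabling Hive support on the SparkSession so Spark SQL talks to the Hive metastore. A minimal sketch, assuming hive-site.xml is in $SPARK_HOME/conf:

import org.apache.spark.sql.SparkSession

object SparkOnHiveDemo {
  def main(args: Array[String]): Unit = {
    // getOrCreate() reuses an existing session; enableHiveSupport() wires in the Hive metastore
    val spark = SparkSession.builder()
      .appName("spark-on-hive")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("show databases").show()
    spark.stop()
  }
}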

Unable to load native-hadoop library for your platform

  See https://blog.csdn.net/csdnliu123/article/details/105488895: download a standalone Hadoop and configure LD_LIBRARY_PATH.
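
  A sketch of the environment setup, assuming the standalone Hadoop is unpacked under /root/hadoop-3.2.1 (put it in spark-env.sh or the shell profile):

export HADOOP_HOME=/root/hadoop-3.2.1
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH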

WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0

  See https://stackoverflow.com/questions/41136946/hive-metastore-warnings-in-spark for the fix.

Understanding Spark execution: the Spark web UI

  https://baijiahao.baidu.com/s?id=1754353885149130981&wfr=spider&for=pc

  The Spark web UI is typically used for performance analysis and tuning: https://blog.csdn.net/Gavin_ke/article/details/130093634

Differences between Spark tables and Hive tables

Spark tables and Hive tables differ as follows:

Different technology stack: Spark tables are read and written by Spark programs, while Hive tables are read and written with HiveQL statements.

Different storage: Spark tables are kept in Spark memory, while Hive tables are stored on Hadoop HDFS.

Different processing speed: Spark tables speed up processing through distributed computation and in-memory storage, while Hive tables are slower to process.

Different supported data sources: Spark tables can read many data sources, including HDFS, Hive tables, relational databases, and NoSQL databases, while Hive tables only read data on HDFS.

You can set spark.sql.legacy.createHiveTableByDefault to false

24/01/05 10:27:37 WARN ResolveSessionCatalog: A Hive serde table will be created as there is no table provider specified. You can set spark.sql.legacy.createHiveTableByDefault to false so that native data source table will be created instead.

https://blog.51cto.com/u_16213399/7219855

Spark 2 to 3 brings substantial improvements and incompatibilities; see the official migration guide: https://spark.apache.org/docs/3.0.0-preview2/sql-migration-guide.html
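
  To get native data source tables by default and silence the warning above, the flag can go into spark-defaults.conf or be passed per submission (a sketch):

spark.sql.legacy.createHiveTableByDefault    false

spark-submit --conf spark.sql.legacy.createHiveTableByDefault=false --class com.sparklearn.WordCount wordcount-1.0-SNAPSHOT-jar-with-dependencies.jar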

Hadoop HDFS startup reports a permission problem

  Starting HDFS reports: hadoop100 Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)

  Setting up passwordless ssh fixes it.

[root@lightdb-dev ~]# cd .ssh/
[root@lightdb-dev .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RMPAOE2aY4qKzB4qzR+DdpFy0iGOdPORAEIubBC/hHs root@lightdb-dev
The key's randomart image is:
+---[RSA 3072]----+
|=+.. =ooo        |
|=o  +o+...       |
|o=+o*+  .        |
|+*.*o+..         |
|+ E =.  S        |
|=. = .           |
|o*o +            |
|+.+. o           |
|o. ..            |
+----[SHA256]-----+
[root@lightdb-dev .ssh]# ssh lightdb-dev 
root@lightdb-dev's password: 
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Fri Jan  5 14:37:30 2024
[root@lightdb-dev ~]# exit
logout
Connection to lightdb-dev closed.
[root@lightdb-dev .ssh]# ssh lightdb-dev 
root@lightdb-dev's password: 

[root@lightdb-dev .ssh]# 
[root@lightdb-dev .ssh]# 
[root@lightdb-dev .ssh]# ssh-copy-id lightdb-dev 
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@lightdb-dev's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'lightdb-dev'"
and check to make sure that only the key(s) you wanted were added.

[root@lightdb-dev .ssh]# ssh lightdb-dev 
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Fri Jan  5 14:40:02 2024 from 192.168.231.128

 

References

https://blog.51cto.com/jiayq/5501230 IDEA configuration

https://blog.csdn.net/weixin_42641909/article/details/96740257 various ways to configure Log4j for Spark

https://blog.csdn.net/yiluohan0307/article/details/80048765 remote debugging
