The exception
The job was submitted with the following command:
spark2-submit \
--master yarn \
--deploy-mode cluster \
--class com.bigdata.PreWarningScalaAppV2 \
--jars /var/lib/hadoop-hdfs/converter-moshi-2.1.0.jar,/var/lib/hadoop-hdfs/fastjson-1.2.58.jar,/var/lib/hadoop-hdfs/guava-20.0.jar,/var/lib/hadoop-hdfs/influxdb-java-2.5.jar,file:/var/lib/hadoop-hdfs/kafka-clients-2.0.0.jar,file:/var/lib/hadoop-hdfs/logging-interceptor-3.5.0.jar,file:/var/lib/hadoop-hdfs/moshi-1.2.0.jar,file:/var/lib/hadoop-hdfs/okhttp-3.5.0.jar,file:/var/lib/hadoop-hdfs/okio-1.11.0.jar,file:/var/lib/hadoop-hdfs/retrofit-2.1.0.jar,file:/var/lib/hadoop-hdfs/spark-streaming-kafka-0-10_2.11-2.4.4.jar,file:/var/lib/hadoop-hdfs/mysql-connector-java-5.1.48.jar \
/var/lib/hadoop-hdfs/prewarning-1.0.jar
Checking the logs shows the exception below:
This kind of error is usually caused by a version conflict.
Checking versions
Our application bundles guava 20.0,
while the guava that ships with the cluster environment is 11.0.
So the job is actually running against the built-in 11.0, and that older version may not have the newer Stopwatch class the application expects.
We therefore need to make Spark prefer our own jar.
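A quick way to confirm which guava the cluster would load is to split the classpath into one entry per line and filter for guava. This is a minimal sketch: the classpath string below is a made-up sample for illustration; on a real CDH node you would feed the output of `hadoop classpath` (or the driver's `java.class.path`) into the same pipeline.

```shell
# Split a colon-separated classpath into lines and filter for guava jars.
# `sample_cp` is a fabricated example; substitute `$(hadoop classpath)`
# on an actual cluster node.
sample_cp="/opt/cloudera/parcels/CDH/jars/guava-11.0.2.jar:/opt/spark/jars/scala-library-2.11.12.jar"
echo "$sample_cp" | tr ':' '\n' | grep -i guava
```

If this prints an 11.x guava jar, the cluster's copy is shadowing the 20.0 jar passed via `--jars`.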
Fixing the problem
Add the following configuration:
--conf "spark.driver.userClassPathFirst=true" \
The official documentation describes this option as: (experimental) whether to give user-added jars precedence over Spark's own jars when loading classes in the driver; it only takes effect in cluster mode.
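Note that the driver setting only affects the driver JVM. If the conflicting class is also touched on the executors (e.g. inside the streaming tasks), Spark provides a matching executor-side option, `spark.executor.userClassPathFirst`. A sketch of passing both flags, assuming the conflict appears on both sides:

```shell
--conf "spark.driver.userClassPathFirst=true" \
--conf "spark.executor.userClassPathFirst=true" \
```

Both options are marked experimental in the Spark docs, so it is worth testing the job after enabling them.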
Finally, run the full command:
spark2-submit \
--master yarn \
--deploy-mode cluster \
--class com.bigdata.PreWarningScalaAppV2 \
--jars /var/lib/hadoop-hdfs/converter-moshi-2.1.0.jar,/var/lib/hadoop-hdfs/fastjson-1.2.58.jar,/var/lib/hadoop-hdfs/guava-20.0.jar,/var/lib/hadoop-hdfs/influxdb-java-2.5.jar,file:/var/lib/hadoop-hdfs/kafka-clients-2.0.0.jar,file:/var/lib/hadoop-hdfs/logging-interceptor-3.5.0.jar,file:/var/lib/hadoop-hdfs/moshi-1.2.0.jar,file:/var/lib/hadoop-hdfs/okhttp-3.5.0.jar,file:/var/lib/hadoop-hdfs/okio-1.11.0.jar,file:/var/lib/hadoop-hdfs/retrofit-2.1.0.jar,file:/var/lib/hadoop-hdfs/spark-streaming-kafka-0-10_2.11-2.4.4.jar,file:/var/lib/hadoop-hdfs/mysql-connector-java-5.1.48.jar \
--conf "spark.driver.userClassPathFirst=true" \
/var/lib/hadoop-hdfs/prewarning-1.0.jar
With that in place the job runs fine:
CDH also shows it as running continuously.