Spark on YARN cluster mode exception: "No suitable driver"

Many posts online say you need to configure --driver-class-path, or that you have to put the MySQL driver jar on Spark's default classpath.

In fact, it is enough to pass the jars via --jars and set the JDBC driver property explicitly in the code.
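Setting the driver class explicitly in the JDBC connection properties can look like the sketch below. This is a minimal illustration, not the original application code: the object name, credentials, URL, and table name are all placeholders, and only the `driver` property line is the essential fix.

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

object MysqlWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("MysqlWriteSketch")
      .getOrCreate()
    import spark.implicits._

    val props = new Properties()
    props.put("user", "root")         // placeholder credentials
    props.put("password", "secret")
    // The key line: name the JDBC driver class explicitly so
    // java.sql.DriverManager can locate it on the cluster, instead
    // of relying on automatic driver discovery.
    props.put("driver", "com.mysql.jdbc.Driver")

    val df = Seq((1, "ok")).toDF("id", "status") // placeholder data
    df.write
      .mode("append")
      .jdbc("jdbc:mysql://db-host:3306/warning_db", "pre_warning", props)
  }
}
```

Without the `driver` property, DriverManager may fail to find the MySQL driver under the classloader used in cluster mode, which is exactly what produces the "No suitable driver" exception.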
Then run the submit script:

spark2-submit \
--master yarn \
--deploy-mode cluster \
--class com.bigdata.PreWarningScalaAppV2 \
--jars /var/lib/hadoop-hdfs/converter-moshi-2.1.0.jar,/var/lib/hadoop-hdfs/fastjson-1.2.58.jar,/var/lib/hadoop-hdfs/guava-20.0.jar,/var/lib/hadoop-hdfs/influxdb-java-2.5.jar,file:/var/lib/hadoop-hdfs/kafka-clients-2.0.0.jar,file:/var/lib/hadoop-hdfs/logging-interceptor-3.5.0.jar,file:/var/lib/hadoop-hdfs/moshi-1.2.0.jar,file:/var/lib/hadoop-hdfs/okhttp-3.5.0.jar,file:/var/lib/hadoop-hdfs/okio-1.11.0.jar,file:/var/lib/hadoop-hdfs/retrofit-2.1.0.jar,file:/var/lib/hadoop-hdfs/spark-streaming-kafka-0-10_2.11-2.4.4.jar,file:/var/lib/hadoop-hdfs/mysql-connector-java-5.1.48.jar \
--conf "spark.driver.userClassPathFirst=true" \
/var/lib/hadoop-hdfs/prewarning-1.0.jar

After that, the job ran successfully: the application kept running without the "No suitable driver" exception.
