Setting up a Spark on YARN development environment in IDEA, plus debugging~

1. Import the YARN and HDFS configuration files

Because Spark on YARN depends on YARN and HDFS, getting hold of their configuration files is the first step. Copy core-site.xml, hdfs-site.xml, and yarn-site.xml into the resources directory of your IDEA project, as shown below:

(screenshot)
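As a quick sanity check (a minimal sketch with a hypothetical object name, not this project's actual code), you can verify the files really are being read from the classpath, since Hadoop's Configuration loads core-site.xml automatically once it is on the classpath:

import org.apache.hadoop.conf.Configuration

object ConfCheck {
  def main(args: Array[String]): Unit = {
    // core-site.xml is picked up automatically from the classpath
    val conf = new Configuration()
    // should print your cluster's HDFS address (e.g. hdfs://namenode:8020), not the local default
    println(conf.get("fs.defaultFS"))
  }
}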

2. Add project dependencies

Besides what you need to add to the pom:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <!--<scope>test</scope>-->
    <version>2.7.3</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <!--<scope>test</scope>-->
    <version>2.7.3</version>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <!--<scope>test</scope>-->
    <version>2.2.1</version>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <!--<scope>test</scope>-->
    <version>2.2.1</version>
</dependency>

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.34</version>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>2.2.1</version>
</dependency>

You also need to add the spark-yarn dependency jar to your module's Dependencies, as shown below:

(screenshots)

This is just to point you in the right direction: in my case the missing jar was spark-yarn_2.11-2.2.1.jar; add whatever your own project is missing. By the way, all the jars Spark depends on live under ${SPARK_HOME}/jars, so go find what you need there. If all else fails, just add the whole directory with * - brute force, but it works.

If you don't add it, you'll get an error like this:

Caused by: org.apache.spark.SparkException: Unable to load YARN support
    at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:413)
    at org.apache.spark.deploy.SparkHadoopUtil$.yarn$lzycompute(SparkHadoopUtil.scala:408)
    at org.apache.spark.deploy.SparkHadoopUtil$.yarn(SparkHadoopUtil.scala:408)
    at org.apache.spark.deploy.SparkHadoopUtil$.get(SparkHadoopUtil.scala:433)
    at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:2381)
    at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:156)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:351)
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175)
    at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:257)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:432)
    at com.timanetworks.spark.faw.CommonStaticConst$.loadHdfsConfig(CommonStaticConst.scala:37)
    at com.timanetworks.spark.faw.CommonStaticConst$.<init>(CommonStaticConst.scala:23)
    at com.timanetworks.spark.faw.CommonStaticConst$.<clinit>(CommonStaticConst.scala)
    ... 3 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.deploy.yarn.YarnSparkHadoopUtil
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:230)
    at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:409)
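If you'd rather not manage jars by hand at all, declaring spark-yarn in the pom should also pull the missing class in (a sketch only - the Scala suffix and version must match your Spark build):

<!-- alternative to adding spark-yarn_2.11-2.2.1.jar manually -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-yarn_2.11</artifactId>
    <version>2.2.1</version>
</dependency>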

3. Modify the following setting in core-site.xml

Comment out the following configuration in core-site.xml:

(screenshot)

In plain terms, comment out this:

<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/topology_script.py</value>
</property>
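Commented out, that block in core-site.xml simply looks like this:

<!--
<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/topology_script.py</value>
</property>
-->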

If you're tempted to copy that .py script down from Linux and point the property at a Windows path instead, I can tell you from experience: I tried... it does not work!

<property>
  <name>net.topology.script.file.name</name>
  <value>D:\spark\spark-2.2.1-bin-hadoop2.7\topology_script.py</value>
</property>

That does not work either - Windows can neither locate nor execute that rack-topology script - so the property really does have to be commented out!

Otherwise you will run into this error:

java.io.IOException: Cannot run program "/etc/hadoop/conf/topology_script.py" (in directory "D:\workspace\fawmc-new44\operation-report-calc"): CreateProcess error=2, 系統找不到指定的文件。
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:520)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
        at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
        at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
        at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
        at org.apache.spark.scheduler.cluster.YarnScheduler.getRackForHost(YarnScheduler.scala:37)
        at org.apache.spark.scheduler.TaskSetManager$$anonfun$addPendingTask$1.apply(TaskSetManager.scala:225)
        at org.apache.spark.scheduler.TaskSetManager$$anonfun$addPendingTask$1.apply(TaskSetManager.scala:206)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.TaskSetManager.addPendingTask(TaskSetManager.scala:206)
        at org.apache.spark.scheduler.TaskSetManager$$anonfun$1.apply$mcVI$sp(TaskSetManager.scala:178)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:166)
        at org.apache.spark.scheduler.TaskSetManager.<init>(TaskSetManager.scala:177)
        at org.apache.spark.scheduler.TaskSchedulerImpl.createTaskSetManager(TaskSchedulerImpl.scala:229)
        at org.apache.spark.scheduler.TaskSchedulerImpl.submitTasks(TaskSchedulerImpl.scala:193)
        at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1055)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:930)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:874)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1695)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1687)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1676)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    Caused by: java.io.IOException: CreateProcess error=2, 系統找不到指定的文件。
        at java.lang.ProcessImpl.create(Native Method)
        at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
        at java.lang.ProcessImpl.start(ProcessImpl.java:137)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
        ... 26 more

Or this error:

"D:\spark\spark-2.2.1-bin-hadoop2.7\topology_script.py" (in directory "D:\workspace\fawmc-new44\operation-report-calc"): CreateProcess error=193, %1 不是有效的 Win32 應用程序。

With the steps above done, you can debug Spark on YARN from IDEA. One more note: debugging is normally done in yarn-client mode, so the driver runs locally in the IDE.
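For reference, here is a minimal sketch of what the entry point can look like when debugging in yarn-client mode from IDEA (the object name, app name, and the spark.yarn.jars path are placeholders, not this project's code):

import org.apache.spark.sql.SparkSession

object YarnClientDebug {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("yarn-client-debug")                   // placeholder app name
      .master("yarn")                                 // run against the YARN cluster
      .config("spark.submit.deployMode", "client")    // the driver stays inside IDEA
      // .config("spark.yarn.jars", "hdfs:///spark/jars/*")  // optional: Spark jars already on HDFS (placeholder path)
      .getOrCreate()

    // tiny smoke test that actually schedules tasks on the cluster
    println(spark.sparkContext.parallelize(1 to 100).sum())

    spark.stop()
  }
}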

The result:

(screenshot)

That's it. If you also run into HDFS permission problems - not being able to create, read, or write files and so on - there is plenty of material online, and it essentially all comes down to the hadoop fs -chmod and hadoop fs -chown commands, which behave much like their Linux namesakes.
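For example (the path, user, and group here are placeholders):

hadoop fs -chmod -R 775 /user/yourname/output
hadoop fs -chown -R yourname:yourgroup /user/yourname/output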
If you run into Windows-side permission problems, see my other post:

Common problems when setting up a hadoop/spark environment on Windows
https://blog.csdn.net/qq_31806205/article/details/79819724
