1. Developing the first Spark program
1) Create a SparkContext
2) Load the data
3) Split each line into words
4) Turn the words into (word, 1) pairs and count them
2. The wordCount program
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Word count: read a text file, split it into words, count each word.
 *
 * @date 2020-05-11 20:19
 * @version 1.0
 */
object CountWord {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("wordcount")
    val sc = new SparkContext(conf)
    // Load the input file; each RDD element is one line of text
    val input = sc.textFile("/Users/navyliu/Downloads/spark/helloSpark.txt")
    // Split every line into words
    val words = input.flatMap(line => line.split(" "))
    // Map each word to (word, 1) and sum the counts per word
    val count = words.map(word => (word, 1)).reduceByKey { case (x, y) => x + y }
    // Write the (word, count) pairs out as text files
    count.saveAsTextFile("/Users/navyliu/Downloads/spark/helloSparkRes")
    sc.stop()
  }
}
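Before packaging, it can be convenient to run the job straight from the IDE against a local master. The sketch below is only an assumption for quick testing (the object name CountWordLocal and setMaster("local[*]") are not part of the program that gets submitted to the cluster):

import org.apache.spark.{SparkConf, SparkContext}

object CountWordLocal {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside this JVM using all cores -- no cluster needed
    val conf = new SparkConf().setAppName("wordcount").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val counts = sc.textFile("/Users/navyliu/Downloads/spark/helloSpark.txt")
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    // Print the (word, count) pairs instead of writing an output directory
    counts.collect().foreach(println)
    sc.stop()
  }
}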
3. Packaging
Configure a jar artifact for the project in the IDE, then build it; the result is the jar (here named untitled.jar) that will be submitted in section 4.
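If you build with sbt instead of the IDE, a minimal build.sbt sketch could look like the following (the project name is an assumption; the Spark and Scala versions match the combination recommended in section 5):

// build.sbt -- minimal sketch for packaging the word count job with sbt
name := "wordcount"
version := "1.0"
scalaVersion := "2.12.11"
// "provided" keeps Spark itself out of the jar, since spark-submit supplies it at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.5" % "provided"

Running sbt package then writes the jar under target/scala-2.12/.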
4. Starting the cluster
1) Start the master
./sbin/start-master.sh
Once the master is up, its web UI is available at:
http://localhost:8080/
2) Start a worker
Workers are started with ./bin/spark-class and need the master URL shown on the web UI:
spark://navydeMacBook-Pro.local:7077
The command to start a worker is:
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://navydeMacBook-Pro.local:7077
Check that the worker process is running; the worker also shows up on the master's web UI.
3) Submit the job
Jobs are submitted with ./bin/spark-submit. Copy the jar built in section 3 into the current directory, then submit it:
./bin/spark-submit --master spark://navydeMacBook-Pro.local:7077 --class CountWord untitled.jar
Run with the full path to the Spark installation, the same command is:
./spark-2.4.5-bin-hadoop2.7/bin/spark-submit --master spark://navydeMacBook-Pro.local:7077 --class CountWord untitled.jar
Result: the output directory helloSparkRes has been created.
(Screenshot: the finished job on the Spark jobs page.)
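A quick way to sanity-check the result is to read the output directory back in spark-shell; a small sketch, assuming the output path used above:

// In spark-shell: each line of the part files is the string form of a (word, count) pair
val res = sc.textFile("/Users/navyliu/Downloads/spark/helloSparkRes")
res.collect().foreach(println)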
5. Problems encountered
If you see an error like this:
dyld: lazy symbol binding failed: Symbol not found: ____chkstk_darwin
Referenced from: /private/var/folders/91/6g1y3wp163jbr7tgdrqgkjg80000gn/T/liblz4-java-8731604412047028366.dylib
Expected in: /usr/lib/libSystem.B.dylib
dyld: Symbol not found: ____chkstk_darwin
Referenced from: /private/var/folders/91/6g1y3wp163jbr7tgdrqgkjg80000gn/T/liblz4-java-8731604412047028366.dylib
Expected in: /usr/lib/libSystem.B.dylib
The cause is a version conflict between macOS and Spark (the bundled liblz4-java native library expects a symbol this combination does not provide). Download Spark again, using version 2.4.5 (spark-2.4.5-bin-hadoop2.7), whose matching Scala version is 2.12.11, and run the job again.
If you run into other problems, you can reach out via the WeChat official account 架構師Plus for help.