[Done] Spark Installation and Hands-On Practice

[Reference] http://dblab.xmu.edu.cn/blog/804-2/

1. Prerequisites

  • Hadoop 2.6.0 or later installed;
  • Java JDK 1.7 or later installed.

2. Downloading Spark

Official download page: http://spark.apache.org/downloads.html
1. Choose a release: Spark 1.6.2
2. Choose a package type: Pre-built with user-provided Hadoop [can use with most Hadoop distributions]
3. Choose a download type: Select Apache Mirror
4. Download Spark: click the link that appears to start the download

3. Installing Spark

Assume the Spark package has been downloaded to the current user's HOME directory.

# Extract the archive
sudo tar -zxf ~/spark-1.6.2-bin-without-hadoop.tgz -C /usr/local/
cd /usr/local
sudo mv ./spark-1.6.2-bin-without-hadoop/ ./spark
# Change the ownership
sudo chown -R hadoop:hadoop ./spark

Configure Spark by editing the configuration file spark-env.sh.

cd /usr/local/spark/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh

Add the following configuration line (for the package built with user-provided Hadoop, this lets Spark find the Hadoop class files and read and write data in HDFS):

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)

Once this is configured, Spark can be used right away; unlike Hadoop, there is no start-up command to run. Use one of the bundled example programs to verify that Spark is installed correctly.

cd /usr/local/spark
bin/run-example SparkPi
# 2>&1 redirects stderr to stdout so that all output can be filtered with grep
bin/run-example SparkPi 2>&1 | grep "Pi is"

Output of the example program:

hadoop@ubuntu:/usr/local/spark$ bin/run-example SparkPi 2>&1 | grep "Pi is"
Pi is roughly 3.14576

4. Writing Code in the Spark Shell

  • Start the Spark Shell; it automatically creates a Spark context object named sc and a SQL context object named sqlContext (a short sqlContext example follows the startup output below).
cd /usr/local/spark
bin/spark-shell

Output after starting the Spark shell:

......
16/09/14 05:18:32 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/09/14 05:18:33 INFO repl.SparkILoop: Created sql context..
SQL context available as sqlContext.
scala> 
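  A minimal sketch of using the sqlContext object, assuming the sample file examples/src/main/resources/people.json that ships with the Spark distribution is present under /usr/local/spark (adjust the path if your layout differs):

scala> val df = sqlContext.read.json("file:///usr/local/spark/examples/src/main/resources/people.json")
scala> df.show()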
  • Load a text file. Through sc, Spark can create an RDD from either a local file or a file stored in HDFS (an HDFS example follows the command below).
scala> val textFile = sc.textFile("file:///usr/local/spark/README.md")
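  If HDFS is already running, an RDD can be created from a file stored in HDFS in the same way; the path below is only an illustration and must be adapted to your NameNode address and HDFS directory layout:
scala> val hdfsFile = sc.textFile("hdfs://localhost:9000/user/hadoop/README.md")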
  • Simple RDD operations
// Get the first line of the RDD textFile
scala> textFile.first()
// Count the number of items (lines) in the RDD textFile
scala> textFile.count()
// Extract the lines that contain "Spark", returning a new RDD
scala> val lineWithSpark = textFile.filter(line => line.contains("Spark"))
// Count the number of lines in the new RDD
scala> lineWithSpark.count()
// Combine RDD operations into a simple MapReduce-style computation: find the largest number of words on any single line (a fuller word-count sketch follows this list)
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a>b) a else b)
  • To exit the Spark Shell, type exit or press Ctrl+C.
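  Building on the operations above, the following is a minimal word-count sketch for the shell (textFile is the RDD created earlier; splitting on single spaces is a simplification):
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
scala> wordCounts.take(10).foreach(println)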

5. Scala Standalone Application Programming

  • Programs written in Scala are compiled and packaged with sbt
  • Java programs are compiled and packaged with Maven
  • Python programs are submitted directly with spark-submit

5-1 Installing sbt

sbt is the tool used to compile and package Scala programs for Spark.

sudo mkdir /usr/local/sbt
sudo chown -R hadoop:hadoop /usr/local/sbt
cd /usr/local/sbt
cp ~/sbt-launch.jar .       # assumes sbt-launch.jar has already been downloaded to the HOME directory
# Create the sbt launcher script
vim ./sbt
  • Add the following content to the sbt script:
#!/bin/bash
SBT_OPTS="-Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M"
java $SBT_OPTS -jar `dirname $0`/sbt-launch.jar "$@"
  • Make the script executable
chmod u+x ./sbt
  • Check that sbt works. Make sure the machine is connected to the Internet; the first run prints download messages such as "Getting org.scala-sbt sbt 0.13.11 …".
./sbt sbt-version

  Output like the following indicates a successful installation:

......
    [SUCCESSFUL ] org.fusesource.jansi#jansi;1.4!jansi.jar (6739ms)
:: retrieving :: org.scala-sbt#boot-scala
    confs: [default]
    5 artifacts copied, 0 already retrieved (24494kB/222ms)
[info] Set current project to sbt (in build file:/usr/local/sbt/)
[info] 0.13.11

5-2 Scala Application Code

  • Create a folder sparkapp as the application root directory, and create a file named SimpleApp.scala inside it.
cd ~
mkdir sparkapp
mkdir -p ./sparkapp/src/main/scala      # Create the required directory structure
vim ./sparkapp/src/main/scala/SimpleApp.scala
  • Write the Scala application code in SimpleApp.scala:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
        def main(args: Array[String]) {
                val logFile = "file:///usr/local/spark/README.md"   // file to analyse
                val conf = new SparkConf().setAppName("Simple Application")
                val sc = new SparkContext(conf)
                val logData = sc.textFile(logFile, 2).cache()       // load the file as an RDD with 2 partitions and cache it
                val numAs = logData.filter(line => line.contains("a")).count()
                val numBs = logData.filter(line => line.contains("b")).count()
                println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
        }
}

  This program counts the number of lines containing "a" and the number of lines containing "b" in /usr/local/spark/README.md. It depends on the Spark API, so it must be compiled and packaged with sbt.

  • Create a new file simple.sbt in ~/sparkapp (vim ./sparkapp/simple.sbt) and add the following content, which declares the program's metadata and its dependency on Spark:
name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.5"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.2"

  In the configuration above, scalaVersion specifies the Scala version and the spark-core dependency specifies the Spark version. Both version numbers can be found in the screen output printed when the Spark shell was started earlier (they can also be queried from a running shell; see the short snippet after the banner below):

......
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (OpenJDK Client VM, Java 1.7.0_111)
......
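  If the startup banner has already scrolled away, both version numbers can also be queried from a running Spark shell, for example:

scala> sc.version
scala> scala.util.Properties.versionString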

5-3 Packaging the Scala Program with sbt

  • Check the application's directory structure
cd ~/sparkapp
find .

  The file structure should look like this:

.
./simple.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleApp.scala
  • Package the application into a JAR (the first run downloads the dependency packages); the generated JAR is located at ~/sparkapp/target/scala-2.10/simple-project_2.10-1.0.jar
hadoop@ubuntu:~/sparkapp$ /usr/local/sbt/sbt package
......
[info] Packaging /home/hadoop/sparkapp/target/scala-2.10/simple-project_2.10-1.0.jar ...
[info] Done packaging.
[success] Total time: 7 s, completed Sep 17, 2016 11:31:28 PM
  • Run the program with spark-submit
# Show the full output
hadoop@ubuntu:~/sparkapp$ /usr/local/spark/bin/spark-submit --class "SimpleApp" ~/sparkapp/target/scala-2.10/simple-project_2.10-1.0.jar 
16/09/17 23:50:00 INFO spark.SparkContext: Running Spark version 1.6.2
16/09/17 23:50:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
......
16/09/17 23:50:12 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
Lines with a: 58, Lines with b: 26
16/09/17 23:50:12 INFO spark.SparkContext: Invoking stop() from shutdown hook
......
16/09/17 23:50:13 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
# Show only the line we care about
hadoop@ubuntu:~/sparkapp$ /usr/local/spark/bin/spark-submit --class "SimpleApp" ~/sparkapp/target/scala-2.10/simple-project_2.10-1.0.jar 2>&1 | grep "Lines with a:"
Lines with a: 58, Lines with b: 26

6. Java Standalone Application Programming

6-1 Installing Maven

# Assumes apache-maven-3.3.9-bin.zip has already been downloaded to the current directory
sudo unzip apache-maven-3.3.9-bin.zip -d /usr/local
cd /usr/local
sudo mv apache-maven-3.3.9/ maven
sudo chown -R hadoop:hadoop maven/

6-2 Java Application Code

  • Go to the HOME directory, create the required directories, and create the file SimpleApp.java
cd ~
mkdir -p sparkapp2/src/main/java
vim sparkapp2/src/main/java/SimpleApp.java
  • Add the following code to SimpleApp.java:
import org.apache.spark.api.java.*;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
        public static void main(String[] args){
                String logFile = "file:///usr/local/spark/README.md";   // file to analyse
                // Create a local Spark context, pointing at the Spark home and the application JAR
                JavaSparkContext sc = new JavaSparkContext("local", "Simple App", "file:///usr/local/spark/",
                    new String[]{"target/simple-project-1.0.jar"});
                JavaRDD<String> logData = sc.textFile(logFile).cache();
                long numAs = logData.filter(new Function<String, Boolean>() {
                        public Boolean call(String s) {
                                return s.contains("a");
                        }
                }).count();

                long numBs = logData.filter(new Function<String, Boolean>() {
                        public Boolean call(String s) {
                                return s.contains("b");
                        }
                }).count();

                System.out.println("Lines with a: " + numAs + ", Lines with b: " + numBs);
        }
}
  • This program depends on the Spark Java API and must be compiled and packaged with Maven. Create a new file pom.xml in ./sparkapp2 (vim ~/sparkapp2/pom.xml) and add the following content, which declares the program's metadata and its dependency on Spark (the coordinates below follow the referenced tutorial; to match the locally installed Spark 1.6.2 built against Scala 2.10, spark-core_2.10 version 1.6.2 could be used instead):
<project>
        <groupId>edu.berkeley</groupId>
        <artifactId>simple-project</artifactId>
        <modelVersion>4.0.0</modelVersion>
        <name>Simple Project</name>
        <packaging>jar</packaging>
        <version>1.0</version>
        <repositories>
                <repository>
                        <id>Akka repository</id>
                        <url>http://repo.akka.io/releases</url>
                </repository>
        </repositories>
        <dependencies>
                <dependency>
                        <groupId>org.apache.spark</groupId>
                        <artifactId>spark-core_2.11</artifactId>
                        <version>2.0.0-preview</version>
                </dependency>
        </dependencies>
</project>

6-3 Packaging the Java Program with Maven

  • Check the application's file structure
hadoop@ubuntu:~/sparkapp2$ find
.
./src
./src/main
./src/main/java
./src/main/java/SimpleApp.java
./pom.xml
  • Package the application into a JAR file (the first run downloads the dependency packages, requires an Internet connection, and takes some time):
hadoop@ubuntu:~/sparkapp2$ /usr/local/maven/bin/mvn package
......
[INFO] Building jar: /home/hadoop/sparkapp2/target/simple-project-1.0.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 32.926 s
[INFO] Finished at: 2016-09-18T18:59:14-07:00
[INFO] Final Memory: 26M/63M
[INFO] ------------------------------------------------------------------------
  • Run the program with spark-submit
hadoop@ubuntu:~/sparkapp2$ /usr/local/spark/bin/spark-submit --class "SimpleApp" ~/sparkapp2/target/simple-project-1.0.jar
......
hadoop@ubuntu:~/sparkapp2$ /usr/local/spark/bin/spark-submit --class "SimpleApp" ~/sparkapp2/target/simple-project-1.0.jar 2>&1 | grep "Lines with a"
Lines with a: 58, Lines with b: 26