Spark Learning (11): Integrating Spark Streaming with Flume


Official documentation:
http://spark.apache.org/docs/2.2.0/streaming-flume-integration.html

Push-based integration: overview

Choose a machine in the cluster such that:

  • When the Flume + Spark Streaming application is launched, one of the Spark workers runs on that machine.
  • Flume can be configured to push data to a port on that machine.

Because of the push model, the streaming application must be started first, with its receiver scheduled and listening on the chosen port, before Flume can push data to it.

Push-based integration: Flume agent configuration

The Flume agent configuration: flume_push_streaming.conf

simple-agent.sources = netcat-source
simple-agent.sinks = avro-sink
simple-agent.channels = memory-channel

simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444

simple-agent.sinks.avro-sink.type = avro
simple-agent.sinks.avro-sink.hostname = hadoop000
simple-agent.sinks.avro-sink.port = 41414

simple-agent.channels.memory-channel.type = memory

simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.avro-sink.channel = memory-channel

Push-based integration: Spark Streaming application development

Add the following dependency to the pom file:

<!-- Spark Streaming + Flume integration dependency -->
 <dependency>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-streaming-flume_2.11</artifactId>
     <version>${spark.version}</version>
 </dependency>
import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Integrating Spark Streaming with Flume, approach one: push.
  */
object FlumePushWordCount {

  def main(args: Array[String]): Unit = {

    if(args.length != 2) {
      System.err.println("Usage: FlumePushWordCount <hostname> <port>")
      System.exit(1)
    }

    val Array(hostname, port) = args

    val sparkConf = new SparkConf() //.setMaster("local[2]").setAppName("FlumePushWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Integrate with Flume: receive events pushed by the avro sink
    val flumeStream = FlumeUtils.createStream(ssc, hostname, port.toInt)

    flumeStream.map(x=> new String(x.event.getBody.array()).trim)
      .flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
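The per-batch logic of the job above (trim, split on spaces, map each word to a pair, reduce by key) can be sanity-checked offline with ordinary shell tools; the input lines here are made-up samples, not data from the course:

```shell
# Simulate one micro-batch: two lines in, word counts out.
# 'hello' appears twice; 'spark' and 'flume' once each.
printf 'hello spark\nhello flume\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
```

This mirrors what `reduceByKey(_+_).print()` emits for a batch, which makes it easy to predict the console output before wiring Flume in.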

Push-based integration: local testing with IDEA

Note: at this point the simple-agent.sinks.avro-sink.hostname line in the Flume config should read:

simple-agent.sinks.avro-sink.hostname = 192.168.199.203 (the IP of the machine running IDEA)

1. Run the code locally from IDEA, passing the local IP and port 41414 as program arguments.
2. Start Flume:

flume-ng agent  \
--name simple-agent   \
--conf $FLUME_HOME/conf    \
--conf-file $FLUME_HOME/conf/flume_push_streaming.conf  \
-Dflume.root.logger=INFO,console

3. Type data in via telnet on hadoop000 (port 44444) and watch the output in the local IDEA console.

Local test summary:
1) Start the Spark Streaming job.
2) Start the Flume agent.
3) Feed data in via telnet and watch the output in the IDEA console.

Push-based integration: testing in a server environment

1. Build with mvn clean package -DskipTests and upload the jar to the production server.
2. Submit with spark-submit.
Note: --packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 pulls the integration jar in at submit time. --packages resolves the artifact from a remote repository, so the machine needs Internet access.

spark-submit \
--class com.imooc.spark.FlumePushWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414
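If the server has no Internet access, an alternative to --packages is to download the integration jar ahead of time and hand it to spark-submit with the standard --jars option; the jar path below is an assumption for illustration:

```shell
# Same submission, but with a locally staged jar instead of remote resolution.
# (/home/hadoop/lib/spark-streaming-flume_2.11-2.2.0.jar is a hypothetical path.)
spark-submit \
--class com.imooc.spark.FlumePushWordCount \
--master local[2] \
--jars /home/hadoop/lib/spark-streaming-flume_2.11-2.2.0.jar \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414
```

Note that --jars does not resolve transitive dependencies, so any jars that spark-streaming-flume itself needs would have to be listed as well.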

3. Start Flume.
Note: at this point the simple-agent.sinks.avro-sink.hostname line in the Flume config should read:

simple-agent.sinks.avro-sink.hostname = hadoop000

The test steps are the same as for the local IDEA setup.

Pull-based integration: overview (recommended)

Instead of Flume pushing data directly to Spark Streaming, this approach runs a custom Flume sink that allows the following:

  • Flume pushes data into the sink, where it stays buffered.
  • Spark Streaming uses a reliable receiver and transactions to pull data from the sink. A transaction succeeds only after the data has been received and replicated by Spark Streaming.

This ensures stronger reliability and fault-tolerance guarantees than the previous approach, but it requires configuring Flume to run the custom sink.

The Flume agent configuration: flume_pull_streaming.conf

simple-agent.sources = netcat-source
simple-agent.sinks = spark-sink
simple-agent.channels = memory-channel

simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444

simple-agent.sinks.spark-sink.type = org.apache.spark.streaming.flume.sink.SparkSink
simple-agent.sinks.spark-sink.hostname = hadoop000
simple-agent.sinks.spark-sink.port = 41414

simple-agent.channels.memory-channel.type = memory

simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.spark-sink.channel = memory-channel
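The custom SparkSink class is not shipped with Flume, so its jar (plus the Scala library and commons-lang3 jars it depends on) must be on Flume's classpath before the agent starts, or the agent fails with a ClassNotFoundException. A minimal sketch of the deployment step, using a scratch directory and empty placeholder files to stand in for a real install (jar names and versions assume Spark 2.2.0 with Scala 2.11):

```shell
# Work in a scratch directory; placeholder files mark the expected jar names.
WORK=$(mktemp -d) && cd "$WORK"
FLUME_HOME="$WORK/flume"          # stand-in for the real Flume install path
mkdir -p "$FLUME_HOME/lib"

# In a real deployment these jars come from the Spark distribution or Maven
# Central; here empty files just demonstrate the layout.
touch spark-streaming-flume-sink_2.11-2.2.0.jar \
      scala-library-2.11.8.jar \
      commons-lang3-3.5.jar

# Put them on Flume's classpath so the agent can load SparkSink.
cp *.jar "$FLUME_HOME/lib/"
ls "$FLUME_HOME/lib"
```

With the jars in place, the agent definition above can reference org.apache.spark.streaming.flume.sink.SparkSink as its sink type.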

Pull-based integration: Spark Streaming application development

pom dependencies:

 <dependency>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-streaming-flume-sink_2.11</artifactId>
     <version>${spark.version}</version>
 </dependency>
 
 <dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>
import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Integrating Spark Streaming with Flume, approach two: pull.
  */
object FlumePullWordCount {

  def main(args: Array[String]): Unit = {

    if(args.length != 2) {
      System.err.println("Usage: FlumePullWordCount <hostname> <port>")
      System.exit(1)
    }

    val Array(hostname, port) = args

    val sparkConf = new SparkConf() //.setMaster("local[2]").setAppName("FlumePullWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Integrate with Flume: pull events from the custom SparkSink
    val flumeStream = FlumeUtils.createPollingStream(ssc, hostname, port.toInt)

    flumeStream.map(x=> new String(x.event.getBody.array()).trim)
      .flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Pull-based integration: local testing with IDEA

Note: start Flume first, then the Spark Streaming application.
1. Start Flume:

flume-ng agent  \
--name simple-agent   \
--conf $FLUME_HOME/conf    \
--conf-file $FLUME_HOME/conf/flume_pull_streaming.conf  \
-Dflume.root.logger=INFO,console

2. Start the application from IDEA.
3. Feed in test data.
4. Check the IDEA console for the word-count output.

Pull-based integration: testing in a server environment

1. Package the IDEA application (same as for the push approach).
2. Start Flume.
3. Submit to Spark:

spark-submit \
--class com.imooc.spark.FlumePullWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414

4. Feed in test data via telnet and watch the output of spark-submit.
