Integrating Kafka with Spark

1. Environment Preparation

1. Prepare the Kafka cluster

1. Prepare and start a Kafka cluster environment

See: Kafka 3.6.1 Cluster Installation and Deployment

2. Create the topic first

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-topics.sh --bootstrap-server 192.168.58.130:9092 --create --partitions 1 --replication-factor 3 --topic first
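
To verify that the topic was created, describe it with the same CLI (the --describe flag is standard in kafka-topics.sh):

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-topics.sh --bootstrap-server 192.168.58.130:9092 --describe --topic first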

2. Prepare the Spark Environment

1. Configure the Scala runtime

1. Download Scala

https://www.scala-lang.org/

2. Configure the runtime environment [omitted]

If you do not use Scala often, you can skip this configuration.

3. Install the Scala plugin in the IDE [omitted]

2. Prepare a base project

1. Create a Maven project named spark-kafka [omitted]
2. Add the Scala SDK in the module settings

3. Under main, create a scala folder, mark it as a source root, and create the package cn.coreqi.spark in it.

4. Add the POM dependency (this artifact also pulls in kafka-clients transitively, which the producer example below relies on):
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.13</artifactId>
            <version>3.5.0</version>
        </dependency>
5. Add the log configuration file log4j.properties under resources (see the note after the listing):
log4j.rootLogger=error, stdout,R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%5L) : %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=../log/agent.log
log4j.appender.R.MaxFileSize=1024KB
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%5L) : %m%n
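
Note: Spark 3.3 and later log through Log4j 2, so a Log4j 1.x properties file may not affect Spark's own logging. A roughly equivalent console-only log4j2.properties sketch (an assumption, not from the original tutorial) would be:

# Log4j 2 equivalent: error-level root logger printing to the console
rootLogger.level = error
rootLogger.appenderRef.stdout.ref = STDOUT
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%t] %c : %m%n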

2. Spark Producer

1. Create a new Scala object SparkKafkaProducer:

package cn.coreqi.spark.producer

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

import java.util.Properties

object SparkKafkaProducer {
  def main(args: Array[String]): Unit = {
    // 0. Kafka configuration
    val properties = new Properties()
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.58.130:9092,192.168.58.131:9092,192.168.58.132:9092")
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
    // 1. Create the Kafka producer
    val producer = new KafkaProducer[String, String](properties)
    // 2. Send data
    for (i <- 1 to 5) {
      producer.send(new ProducerRecord[String, String]("first", "coreqi" + i))
    }
    // 3. Close resources (also flushes any buffered records)
    producer.close()
  }
}
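
As an optional check (not in the original), send() also accepts a delivery callback from the standard kafka-clients API. A sketch that could replace the plain send inside the loop above:

import org.apache.kafka.clients.producer.{Callback, RecordMetadata}

// Asynchronous send with a delivery-confirmation callback
producer.send(
  new ProducerRecord[String, String]("first", "coreqi" + i),
  new Callback {
    override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
      if (exception == null)
        println(s"delivered to partition ${metadata.partition()}, offset ${metadata.offset()}")
      else
        exception.printStackTrace() // delivery failed after retries
  }
)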

2. Start a Kafka console consumer:

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.58.130:9092 --topic first

3. Run the SparkKafkaProducer program and watch the console consumer; it should print coreqi1 through coreqi5.

3. Spark Consumer

1. Adjust the POM dependencies, adding spark-core and spark-streaming; all three artifacts must use the same Scala suffix (_2.13) and Spark version (3.5.0):

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.13</artifactId>
            <version>3.5.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.13</artifactId>
            <version>3.5.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.13</artifactId>
            <version>3.5.0</version>
        </dependency>

2. Create a new Scala object SparkKafkaConsumer:

package cn.coreqi.spark.consumer

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkKafkaConsumer {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkConf; local[*] uses as many worker threads as cores
    val sparkConf: SparkConf = new SparkConf().setAppName("sparkstreaming").setMaster("local[*]")
    // 2. Create the StreamingContext with a 3-second batch interval
    val ssc = new StreamingContext(sparkConf, Seconds(3))
    // 3. Define Kafka parameters: cluster address, consumer group id, key/value deserializers
    val kafkaPara: Map[String, Object] = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "192.168.58.130:9092,192.168.58.131:9092,192.168.58.132:9092",
      ConsumerConfig.GROUP_ID_CONFIG -> "coreqiGroup",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer]
    )
    // 4. Create a DStream by reading from Kafka
    val kafkaDStream: InputDStream[ConsumerRecord[String, String]] =
      KafkaUtils.createDirectStream[String, String](
        ssc,
        LocationStrategies.PreferConsistent, // location strategy: spread partitions evenly across executors
        ConsumerStrategies.Subscribe[String, String](Set("first"), kafkaPara) // consumer strategy: (topics to subscribe to, Kafka parameters)
      )
    // 5. Extract the value of each record
    val valueDStream: DStream[String] = kafkaDStream.map(record => record.value())
    // 6. Print the values (see the word-count sketch after this listing)
    valueDStream.print()
    // 7. Start the job and block until termination
    ssc.start()
    ssc.awaitTermination()
  }
}
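
The original step-6 comment mentions WordCount, but the listing only prints the raw values. A minimal sketch of an actual per-batch word count (assuming space-separated words in each message; reuses valueDStream from the listing) that could replace the print() call:

// Hypothetical step 6: count words within each 3-second batch
val wordCounts: DStream[(String, Int)] = valueDStream
  .flatMap(_.split(" "))  // split each message value into words
  .map(word => (word, 1)) // pair each word with an initial count of 1
  .reduceByKey(_ + _)     // sum the counts per word within the batch
wordCounts.print()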

3. Start the SparkKafkaConsumer program

4. Start a Kafka console producer:

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-console-producer.sh --bootstrap-server 192.168.58.130:9092 --topic first

5. Type messages into the console producer and watch them print in the IDEA console at each batch interval.
