Spark -- Structured Streaming Chapter 3: Integration with Other Technologies, from Kafka-produced data to a MySQL table

Integrating with Kafka

Official documentation

http://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html

●Creating a Kafka Source for Streaming Queries

// Subscribe to 1 topic
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
// Subscribe to multiple topics
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1,topic2")
  .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
// Subscribe to a pattern (wildcard topics)
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribePattern", "topic.*")
  .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]

●Creating a Kafka Source for Batch Queries (batch reads from Kafka)

// Subscribe to 1 topic
// defaults to the earliest and latest offsets
val df = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
// Subscribe to multiple topics,
// specifying explicit Kafka offsets
val df = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1,topic2")
  .option("startingOffsets", """{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
  .option("endingOffsets", """{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
  .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
// Subscribe to a pattern (wildcard topics), at the earliest and latest offsets
val df = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribePattern", "topic.*")
  .option("startingOffsets", "earliest")
  .option("endingOffsets", "latest")
  .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
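
Besides subscribe and subscribePattern, the Kafka source also accepts an assign option that pins the query to explicit topic partitions; only one of the three subscription options may be set per query. A minimal sketch, reusing the placeholder brokers from the snippets above:

// Assign specific partitions (here: partitions 0 and 1 of topic1, partition 2 of topic2)
val df = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("assign", """{"topic1":[0,1],"topic2":[2]}""")
  .load()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]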

●Note: the schema of the data read from Kafka is fixed and contains the following columns:

| Column        | Type   | Description    |
| ------------- | ------ | -------------- |
| key           | binary | message key    |
| value         | binary | message value  |
| topic         | string | topic name     |
| partition     | int    | partition      |
| offset        | long   | offset         |
| timestamp     | long   | timestamp      |
| timestampType | int    | timestamp type |
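
These metadata columns can be selected alongside the payload like any ordinary column. A minimal sketch (the broker and topic names are placeholders) that keeps each message's value together with where it came from:

val enrichedDF = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "node01:9092")
  .option("subscribe", "spark_kafka")
  .load()
  // cast the binary payload and keep the Kafka metadata columns
  .selectExpr("CAST(value AS STRING) AS value", "topic", "partition", "offset", "timestamp")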

 

●Note: the following parameters must not be set, otherwise Kafka will throw an exception:

  • group.id: the Kafka source automatically creates a unique group id for every query
  • auto.offset.reset: to avoid having to set startingOffsets by hand every time, Structured Streaming manages offsets internally while consuming, which guarantees that no data is lost when subscribing to dynamically created topics. For streaming queries, startingOffsets only applies to the very first start; afterwards the query always resumes from the offsets it has saved (a checkpoint sketch follows this list).
  • key.deserializer, value.deserializer, key.serializer, value.serializer: keys and values are always handled as byte arrays (ByteArraySerializer/ByteArrayDeserializer), so cast or parse them explicitly with DataFrame operations
  • enable.auto.commit: the Kafka source does not commit any offsets
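
Since the source manages offsets itself, the usual way to let a restarted query resume where it left off is to give the sink a checkpointLocation; Structured Streaming stores the consumed offsets and query progress there. A minimal sketch, assuming result is a streaming Dataset like the one built in the demo below and that the checkpoint path (a made-up local directory) is writable:

result.writeStream
  .format("console")
  .outputMode("complete")
  // offsets and progress are tracked here; on restart the query resumes from
  // the checkpointed position instead of startingOffsets
  .option("checkpointLocation", "./checkpoint/kafka-wordcount") // hypothetical local path
  .start()
  .awaitTermination()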

Now for the code demo!

package cn.itcast.structedstreaming

import org.apache.spark.SparkContext
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}

object KafkaStructuredStreamingDemo {
  def main(args: Array[String]): Unit = {
    //1. Create SparkSession
    val spark: SparkSession = SparkSession.builder().master("local[*]").appName("SparkSQL").getOrCreate()
    val sc: SparkContext = spark.sparkContext
    sc.setLogLevel("WARN")
    import spark.implicits._
    //2. Connect to Kafka and consume data
    val dataDF: DataFrame = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "node01:9092")
      .option("subscribe", "spark_kafka")
      .load()
    //3. Process the data
    //Note: data that Structured Streaming reads from Kafka is binary,
    //so cast it to the actual type as the docs require
    val dataDS: Dataset[String] = dataDF.selectExpr("CAST(value AS STRING)").as[String]
    val wordDS: Dataset[String] = dataDS.flatMap(_.split(" "))
    val result: Dataset[Row] = wordDS.groupBy("value").count().sort($"count".desc)
    result.writeStream
      .format("console")
      .outputMode("complete")
      .trigger(Trigger.ProcessingTime(0))
      .option("truncate", false) //show long columns in full instead of truncating them
      .start()
      .awaitTermination()
  }
}

 

 

Integrating with MySQL

Overview

●Requirement

In development we often need to write the results of a streaming computation to an external database such as MySQL, but unfortunately the Structured Streaming API does not support external databases as sinks out of the box.

If such support is added in the future, its API will be very simple, for example:

format("jdbc").option("url","jdbc:mysql://...").start()

For now, though, we have to write our own JdbcSink by extending ForeachWriter and implementing its methods.

 

Now for the code demo!

package cn.itcast.structedstreaming

import java.sql.{Connection, DriverManager, PreparedStatement}

import org.apache.spark.SparkContext
import org.apache.spark.sql._
import org.apache.spark.sql.streaming.Trigger


object JDBCSinkDemo {
  def main(args: Array[String]): Unit = {
    //1. Create SparkSession
    val spark: SparkSession = SparkSession.builder().master("local[*]").appName("SparkSQL").getOrCreate()
    val sc: SparkContext = spark.sparkContext
    sc.setLogLevel("WARN")
    import spark.implicits._
    //2. Connect to Kafka and consume data
    val dataDF: DataFrame = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "node01:9092")
      .option("subscribe", "spark_kafka")
      .load()
    //3. Process the data
    //Note: data that Structured Streaming reads from Kafka is binary, so cast it to the actual type as the docs require
    val dataDS: Dataset[String] = dataDF.selectExpr("CAST(value AS STRING)").as[String]
    val wordDS: Dataset[String] = dataDS.flatMap(_.split(" "))
    val result: Dataset[Row] = wordDS.groupBy("value").count().sort($"count".desc)
    val writer = new JDBCSink("jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8", "root", "root")
    result.writeStream
      .foreach(writer)
      .outputMode("complete")
      .trigger(Trigger.ProcessingTime(0))
      .start()
      .awaitTermination()
  }

  class JDBCSink(url:String,username:String,password:String) extends ForeachWriter[Row] with Serializable{
    var connection: Connection = _ //_ is a placeholder; the variable is assigned later
    var preparedStatement: PreparedStatement = _
    //open the connection
    override def open(partitionId: Long, version: Long): Boolean = {
      connection = DriverManager.getConnection(url, username, password)
      true
    }

    /*
    CREATE TABLE `t_word` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `word` varchar(255) NOT NULL,
        `count` int(11) DEFAULT NULL,
        PRIMARY KEY (`id`),
        UNIQUE KEY `word` (`word`)
      ) ENGINE=InnoDB AUTO_INCREMENT=26 DEFAULT CHARSET=utf8;
     */
    //replace INTO `bigdata`.`t_word` (`id`, `word`, `count`) VALUES (NULL, NULL, NULL);
    //process the data -- save it to MySQL
    override def process(row: Row): Unit = {
      val word: String = row.get(0).toString
      val count: String = row.get(1).toString
      println(word+":"+count)
      //REPLACE INTO: insert the row if it does not exist yet, otherwise replace it
      //Note: REPLACE INTO requires the table to have a primary key or a unique index
      val sql = "REPLACE INTO `t_word` (`id`, `word`, `count`) VALUES (NULL, ?, ?);"
      preparedStatement = connection.prepareStatement(sql)
      preparedStatement.setString(1,word)
      preparedStatement.setInt(2,Integer.parseInt(count))
      preparedStatement.executeUpdate()
    }

    //release resources (close the statement before the connection)
    override def close(errorOrNull: Throwable): Unit = {
      if (preparedStatement != null) {
        preparedStatement.close()
      }
      if (connection != null) {
        connection.close()
      }
    }
  }
}
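
To spot-check what the sink wrote, the t_word table can be read back with Spark's built-in JDBC reader. A minimal sketch, assuming an active SparkSession named spark (for example the one created in main above), the same URL and credentials, and mysql-connector-java on the classpath:

val verifyDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8")
  .option("dbtable", "t_word")
  .option("user", "root")
  .option("password", "root")
  .load()
// print the persisted word counts without truncating long values
verifyDF.show(false)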
