As most people know, since Spark 1.3 KafkaUtils has offered two ways to create a DStream: the older createStream method and the newer createDirectStream method. The official documentation (http://spark.apache.org/docs/latest/streaming-kafka-integration.html) already covers their trade-offs in detail. In short, createDirectStream performs better: the RDD partitions of the DStream it creates map one-to-one to the partitions of the Kafka topic, and messages are consumed straight from the topic via Kafka's low-level API. The downside is that it no longer writes consumer offsets to ZooKeeper, so any monitoring tools that rely on ZooKeeper-based consumer offsets stop working.
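For reference, here is a minimal sketch of the simpler createDirectStream overload described in that guide, which takes no explicit offsets and starts from the latest (or earliest, depending on auto.offset.reset) offsets; the broker list and topic name below are placeholders:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val sparkConf = new SparkConf().setAppName("DirectKafkaExample")
val ssc = new StreamingContext(sparkConf, Seconds(10))

// Placeholder brokers and topic
val kafkaParams = Map[String, String]("metadata.broker.list" -> "broker1:9092,broker2:9092")
val topicsSet = Set("rd_e_pal")

// One RDD partition per Kafka topic partition; offsets are tracked by the DStream itself,
// not written back to ZooKeeper
val directKafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topicsSet)
```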
The official docs only touch on this in passing, noting that you can update the offsets in ZooKeeper yourself inside foreachRDD:
```scala
directKafkaStream.foreachRDD { rdd =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // offsetRanges.length = # of Kafka partitions being consumed
  ...
}
```
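The `...` is where you would actually write those ranges back to ZooKeeper (or any other store). Each OffsetRange carries the topic, partition, fromOffset and untilOffset of what the batch just read; a minimal sketch that simply logs them:

```scala
import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

directKafkaStream.foreachRDD { rdd =>
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  offsetRanges.foreach { o =>
    // untilOffset is the exclusive upper bound of what this batch consumed
    println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
  }
}
```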
Exactly-once semantics therefore have to be implemented yourself. The rough idea is that when the driver starts, it first reads the consumer offsets from ZooKeeper; createDirectStream has two overloads, one of which lets you start consuming from arbitrary offsets. Partial code:
```scala
def createDirectStream(implicit streamingConfig: StreamingConfig, kc: KafkaCluster) = {
  val extractors = streamingConfig.getExtractors()
  // Read the consumer offsets from ZooKeeper and start consuming from them
  val messages = {
    val kafkaPartitionsE = kc.getPartitions(streamingConfig.topicSet)
    if (kafkaPartitionsE.isLeft) throw new SparkException("get kafka partition failed:")
    val kafkaPartitions = kafkaPartitionsE.right.get
    val consumerOffsetsE = kc.getConsumerOffsets(streamingConfig.group, kafkaPartitions)
    if (consumerOffsetsE.isLeft) throw new SparkException("get kafka consumer offsets failed:")
    val consumerOffsets = consumerOffsetsE.right.get
    consumerOffsets.foreach {
      case (tp, n) => println("===================================" + tp.topic + "," + tp.partition + "," + n)
    }
    KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
      ssc, kafkaParams, consumerOffsets, (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message))
  }
  messages
}
```
There are a couple of problems here. If the consumer group is new, i.e. it is consuming for the first time, ZooKeeper does not yet have an offsets node for that group, so the offsets node has to be initialized first. Alternatively, the offsets recorded in ZooKeeper may already be stale: because Kafka periodically deletes old log segments, consuming directly from those stored offsets throws an OffsetOutOfRange exception, since the log segment/index files the offsets point into no longer exist. The following handles both cases:
```scala
def setOrUpdateOffsets(implicit streamingConfig: StreamingConfig, kc: KafkaCluster): Unit = {
  streamingConfig.topicSet.foreach(topic => {
    println("current topic:" + topic)
    var hasConsumed = true
    val kafkaPartitionsE = kc.getPartitions(Set(topic))
    if (kafkaPartitionsE.isLeft) throw new SparkException("get kafka partition failed:")
    val kafkaPartitions = kafkaPartitionsE.right.get
    val consumerOffsetsE = kc.getConsumerOffsets(streamingConfig.group, kafkaPartitions)
    if (consumerOffsetsE.isLeft) hasConsumed = false
    if (hasConsumed) {
      // The group has consumed before. If the streaming job hits kafka.common.OffsetOutOfRangeException,
      // the offsets stored in ZooKeeper are stale: Kafka's retention policy has already deleted the
      // log segments containing them.
      // To handle this, compare the ZooKeeper consumerOffsets with leaderEarliestOffsets. If the
      // consumerOffsets are smaller than leaderEarliestOffsets, they are stale, so reset the consumer
      // offsets to leaderEarliestOffsets.
      val leaderEarliestOffsets = kc.getEarliestLeaderOffsets(kafkaPartitions).right.get
      println(leaderEarliestOffsets)
      val consumerOffsets = consumerOffsetsE.right.get
      val flag = consumerOffsets.forall {
        case (tp, n) => n < leaderEarliestOffsets(tp).offset
      }
      if (flag) {
        println("consumer group:" + streamingConfig.group + " offsets are stale, resetting to leaderEarliestOffsets")
        val offsets = leaderEarliestOffsets.map {
          case (tp, offset) => (tp, offset.offset)
        }
        kc.setConsumerOffsets(streamingConfig.group, offsets)
      }
      else {
        println("consumer group:" + streamingConfig.group + " offsets are valid, no update needed")
      }
    }
    else {
      // The group has never consumed this topic, so start from the latest offsets.
      val leaderLatestOffsets = kc.getLatestLeaderOffsets(kafkaPartitions).right.get
      println(leaderLatestOffsets)
      println("consumer group:" + streamingConfig.group + " has not consumed yet, initializing to leaderLatestOffsets")
      val offsets = leaderLatestOffsets.map {
        case (tp, offset) => (tp, offset.offset)
      }
      kc.setConsumerOffsets(streamingConfig.group, offsets)
    }
  })
}
```
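Putting the pieces together, the driver startup order matters: repair or initialize the ZooKeeper offsets first, then build the direct stream from them. Here is a sketch of the wiring, assuming streamingConfig, kc (the KafkaCluster) and ssc are defined in the enclosing scope as in the snippets above:

```scala
// 1. Make sure ZooKeeper holds usable offsets for this consumer group
setOrUpdateOffsets

// 2. Build the direct stream starting from those (now valid) offsets
val messages = createDirectStream

// 3. Process each batch; offsets are committed to ZooKeeper only after a batch succeeds
processData(messages)

ssc.start()
ssc.awaitTermination()
```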
OK, that takes care of driver startup. Next comes updating the ZooKeeper offsets after the messages have been processed. Note the order: update only after processing. Imagine the opposite, where you update the ZooKeeper offsets first and then process the messages and save the results somewhere else. If the processing step fails because of a bug in your code, ZooKeeper has already been updated, so those messages were consumed but never processed; after you fix the bug and resubmit the job, they will not be consumed again, because the ZooKeeper offsets have already moved past them. With that in mind, partial code:
```scala
def updateZKOffsets(rdd: RDD[(String, String)])(implicit streamingConfig: StreamingConfig, kc: KafkaCluster): Unit = {
  println("rdd not empty, update zk offset")
  val offsetsList = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  for (offsets <- offsetsList) {
    val topicAndPartition = TopicAndPartition(offsets.topic, offsets.partition)
    val o = kc.setConsumerOffsets(streamingConfig.group, Map((topicAndPartition, offsets.untilOffset)))
    if (o.isLeft) {
      println(s"Error updating the offset to Kafka cluster: ${o.left.get}")
    }
  }
}
```
```scala
def processData(messages: InputDStream[(String, String)])(implicit streamingConfig: StreamingConfig, kc: KafkaCluster): Unit = {
  messages.foreachRDD(rdd => {
    if (!rdd.isEmpty()) {
      val datamodelRDD = streamingConfig.relation match {
        case "1" =>
          val (topic, _) = streamingConfig.topic_table_mapping
          val extractor = streamingConfig.getExtractor(topic)
          // Single-topic case: filter and convert each message with the topic's extractor
          val topicsSet = Set(topic)
          val datamodel = rdd.filter(msg => {
            extractor.filter(msg)
          }).map(msg => extractor.msgToRow(msg))
          datamodel
        case "2" =>
          val (topics, _) = streamingConfig.topic_table_mapping
          val extractors = streamingConfig.getExtractors(topics)
          val topicsSet = topics.split(",").toSet
          // Kafka messages are key-value pairs; the key is used to partition messages. To spread messages
          // across partitions, the collector builds the key as "<topic>|<random number>", e.g. "rd_e_pal|20".
          // Splitting on "|" and taking element 0 recovers the topic name, so messages from several topics
          // unioned into one stream can still be told apart.
          val datamodel = rdd.filter(msg => {
            val keyValid = msg != null && msg._1 != null && msg._1.split("\\|").length == 2
            if (keyValid) {
              val topic = msg._1.split("\\|")(0)
              val (_, extractor) = extractors.find(p => {
                p._1.equalsIgnoreCase(topic)
              }).getOrElse(throw new RuntimeException("no extractor configured for topic: " + topic))
              // trim strips the trailing newline; otherwise the last field would carry a "\n"
              extractor.filter(msg._2.trim)
            }
            else {
              false
            }
          }).map {
            case (key, msgContent) =>
              val topic = key.split("\\|")(0)
              val (_, extractor) = extractors.find(p => {
                p._1.equalsIgnoreCase(topic)
              }).getOrElse(throw new RuntimeException("no extractor configured for topic: " + topic))
              extractor.msgToRow((key, msgContent))
          }
          datamodel
      }
      // Process the messages first...
      processRDD(datamodelRDD)
      // ...and only then commit the offsets to ZooKeeper
      updateZKOffsets(rdd)
    }
  })
}
```
```scala
def processRDD(rdd: RDD[Row])(implicit streamingConfig: StreamingConfig) = {
  if (streamingConfig.targetType == "mongo") {
    val target = streamingConfig.getTarget().asInstanceOf[MongoTarget]
    if (!MongoDBClient.db.collectionExists(target.collection)) {
      println("create collection:" + target.collection)
      MongoDBClient.db.createCollection(target.collection, MongoDBObject("storageEngine" -> MongoDBObject("wiredTiger" -> MongoDBObject())))
      val coll = MongoDBClient.db(target.collection)
      // Create a TTL index if the target asks for one
      if (target.ttlIndex) {
        val indexs = coll.getIndexInfo
        if (indexs.find(p => p.get("name") == "ttlIndex") == None) {
          coll.createIndex(MongoDBObject(target.ttlColumn -> 1), MongoDBObject("expireAfterSeconds" -> target.ttlExpire, "name" -> "ttlIndex"))
        }
      }
    }
  }
  val (_, table) = streamingConfig.topic_table_mapping
  val schema = streamingConfig.getTableSchema(table)
  // Get the singleton instance of the SQL context
  val sqlContext = HIVEContextSingleton.getInstance(rdd.sparkContext)
  // Convert RDD[Row] to a DataFrame using the configured schema
  val dataFrame = sqlContext.createDataFrame(rdd, schema)
  // Register as a temporary table
  dataFrame.registerTempTable(table)
  // Run the configured SQL against the table, e.g.:
  // select dt, hh(vtm) as hr, app_key, collect_set(device_id) as deviceids from rd_e_app_header
  //   where dt=20150401 and hh(vtm)='01' group by dt, hh(vtm), app_key limit 100;
  val results = sqlContext.sql(streamingConfig.sql)
  streamingConfig.targetType match {
    case "mongo" => saveToMongo(results)
    case "show" => results.show()
  }
}
```
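HIVEContextSingleton is not shown in the post; presumably it follows the lazily-initialized singleton pattern for the SQL context from the Spark Streaming programming guide, so that every micro-batch reuses the same context. A minimal sketch of what it might look like (the object and method names are assumed from the call site above):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

// Lazily instantiated singleton so each micro-batch reuses the same HiveContext
object HIVEContextSingleton {

  @transient private var instance: HiveContext = _

  def getInstance(sparkContext: SparkContext): HiveContext = {
    if (instance == null) {
      instance = new HiveContext(sparkContext)
    }
    instance
  }
}
```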
```scala
def saveToMongo(df: DataFrame)(implicit streamingConfig: StreamingConfig) = {
  val target = streamingConfig.getTarget().asInstanceOf[MongoTarget]
  val coll = MongoDBClient.db(target.collection)
  val result = df.collect()
  if (result.size > 0) {
    val bulkWrite = coll.initializeUnorderedBulkOperation
    result.foreach(row => {
      val id = row(target.pkIndex)
      val setFields = target.columns.filter(p => p.op == "set").map(f => (f.name, row(f.index))).toArray
      val incFields = target.columns.filter(p => p.op == "inc").map(f => {
        (f.name, row(f.index).asInstanceOf[Long])
      }).toArray
      // e.g. obj = obj.++($addToSet(MongoDBObject("test" -> MongoDBObject("$each" -> Array(3, 4)), "test1" -> MongoDBObject("$each" -> Array(1, 2)))))
      var obj = MongoDBObject()
      var addToSetObj = MongoDBObject()
      target.columns.filter(p => p.op == "addToSet").foreach(col => {
        col.mType match {
          case "Int" =>
            addToSetObj = addToSetObj.++(col.name -> MongoDBObject("$each" -> row(col.index).asInstanceOf[ArrayBuffer[Int]]))
          case "Long" =>
            addToSetObj = addToSetObj.++(col.name -> MongoDBObject("$each" -> row(col.index).asInstanceOf[ArrayBuffer[Long]]))
          case "String" =>
            addToSetObj = addToSetObj.++(col.name -> MongoDBObject("$each" -> row(col.index).asInstanceOf[ArrayBuffer[String]]))
        }
      })
      if (addToSetObj.size > 0) obj = obj.++($addToSet(addToSetObj))
      if (incFields.size > 0) obj = obj.++($inc(incFields: _*))
      if (setFields.size > 0) obj = obj.++($set(setFields: _*))
      bulkWrite.find(MongoDBObject("_id" -> id)).upsert().updateOne(obj)
    })
    bulkWrite.execute()
  }
}
```
Thinking it over, this still does not achieve exactly-once semantics. Writing to Mongo and updating ZooKeeper are not one transaction: if the Mongo write succeeds and the ZooKeeper update then fails, that batch of data gets recomputed on the next start. For UV this does not matter, since addToSet is a deduplicating operation; but PV uses inc, so that batch would be counted twice. In practice, if the batch interval is fairly short, this is usually acceptable.
Original post: http://blog.csdn.net/xiao_jun_0820/article/details/46911775