Fixing spark.rdd.MapPartitionsRDD cannot be cast to streaming.kafka010.HasOffsetRanges

While testing Spark Streaming recently I ran into a small problem of my own making; this is a note for the record.
Partial code:

package com.ybs.screen.test.data

import java.lang
import java.util.Properties

import com.ybs.screen.constant.Constants
import com.ybs.screen.model.{ProperModel, UnitInfo}
import com.ybs.screen.utils.PropertiesUtil
import org.apache.kafka.clients.consumer.{Consumer, ConsumerRecord, KafkaConsumer}
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.{SparkConf, SparkContext, TaskContext}
import org.elasticsearch.spark.streaming.EsSparkStreaming

import scala.collection.JavaConverters._

object DemoTest {

  def main(args: Array[String]): Unit = {

    val sparkConf: SparkConf = new SparkConf().setAppName(this.getClass.getSimpleName).setMaster("local[*]")

    val sparkSession: SparkSession = PropertiesUtil.getSparkSessionTest(sparkConf)
    sparkSession.sparkContext.setLogLevel("WARN")

    val ssc: StreamingContext = new StreamingContext(sparkSession.sparkContext,Seconds(10))

    // Kafka brokers and topic
    val kafkaBrokers:String = ProperModel.getString(Constants.KAFKA_METADATA_BROKER_LIST)
    val kafkaTopics: String =ProperModel.getString( Constants.KAFKA_TOPICS)

    val kafkaParam = Map(
      "bootstrap.servers" -> kafkaBrokers,
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "group4",
      // start from the latest offsets when nothing has been committed yet
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: lang.Boolean)
    )

    //    ssc.checkpoint("./streaming_checkpoint")
    // read data from Kafka
    val inputDStream: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Set(kafkaTopics), kafkaParam, getLastOffsets(kafkaParam,Set(kafkaTopics))))

    val value: DStream[String] = inputDStream.map(x => x.value())

    EsSparkStreaming.saveToEs(value, "test/doc")

    value.foreachRDD(rdd => {
      // rdd here is a MapPartitionsRDD (the result of map above), not a KafkaRDD
      val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

      inputDStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    })

    ssc.start()
    ssc.awaitTermination()

  }
	
}
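
The helper getLastOffsets referenced above is not shown in the original post. For completeness, here is a minimal sketch of what such a helper might look like, assuming it simply reads the consumer group's committed offsets from Kafka using the KafkaConsumer, TopicPartition and JavaConverters imports already present in the file; this is an assumption, not the author's actual implementation:

def getLastOffsets(kafkaParam: Map[String, Object],
                   topics: Set[String]): Map[TopicPartition, Long] = {
  // a plain consumer used only to ask the broker for this group's committed offsets
  val consumer: Consumer[String, String] =
    new KafkaConsumer[String, String](kafkaParam.asJava)
  try {
    topics.flatMap { topic =>
      consumer.partitionsFor(topic).asScala.flatMap { pi =>
        val tp = new TopicPartition(pi.topic(), pi.partition())
        // keep only partitions that actually have a committed offset;
        // the rest fall back to auto.offset.reset
        Option(consumer.committed(tp)).map(om => tp -> om.offset())
      }
    }.toMap
  } finally {
    consumer.close()
  }
}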

When committing the offsets, I got the error spark.rdd.MapPartitionsRDD cannot be cast to streaming.kafka010.HasOffsetRanges.
Some searching turned up the explanation: only the RDDs handed out directly by the inputDStream obtained from Kafka are KafkaRDDs. Any further transformation wraps the KafkaRDD in a non-Kafka RDD (here a MapPartitionsRDD), and at that point the cast fails. The relevant source:

private[spark] class KafkaRDD[K, V](
    sc: SparkContext,
    val kafkaParams: ju.Map[String, Object],
    val offsetRanges: Array[OffsetRange],
    val preferredHosts: ju.Map[TopicPartition, String],
    useConsumerCache: Boolean
) extends RDD[ConsumerRecord[K, V]](sc, Nil) with Logging with HasOffsetRanges

// Only a KafkaRDD mixes in HasOffsetRanges, i.e. only a KafkaRDD can yield OffsetRanges,
// and only the first-hand RDDs produced by the InputDStream are KafkaRDDs.
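
To make the distinction concrete, here is a small check (a sketch; the println comments only indicate the expected output) showing that the RDDs coming out of value are MapPartitionsRDDs, while the ones coming straight out of inputDStream are KafkaRDDs and therefore implement HasOffsetRanges:

value.foreachRDD { rdd =>
  // the result of map() is a MapPartitionsRDD, which does not implement HasOffsetRanges
  println(rdd.getClass.getName)              // org.apache.spark.rdd.MapPartitionsRDD
  println(rdd.isInstanceOf[HasOffsetRanges]) // false -> asInstanceOf would throw here
}

inputDStream.foreachRDD { rdd =>
  // the direct stream hands out KafkaRDDs, which do implement HasOffsetRanges
  println(rdd.isInstanceOf[HasOffsetRanges]) // true -> the cast is safe here
}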

Once the cause was clear, the fix was straightforward. Either do the work inside a foreachRDD on the inputDStream itself, reading the offsets there and moving the Elasticsearch write into that same block, or commit the offsets first, right after obtaining the inputDStream, and only then run the other operations. I took the second, slightly clumsier route: commit the offsets as soon as the inputDStream is available, then do everything else (a sketch of the first option follows after the snippet below).

inputDStream.foreachRDD(rdd => {
  // rdd is a KafkaRDD here, so the cast to HasOffsetRanges succeeds
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  inputDStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
})
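
For reference, the first option (keeping everything inside one foreachRDD on the inputDStream) could look roughly like this. It is a sketch, not the author's code: it uses org.elasticsearch.spark.rdd.EsSpark instead of EsSparkStreaming because we are already inside foreachRDD, it reuses the "test/doc" index from the example above, and if the values are JSON strings saveJsonToEs may be the better fit:

inputDStream.foreachRDD(rdd => {
  // rdd is still the KafkaRDD produced by the direct stream, so the cast succeeds
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // write this batch's values to Elasticsearch
  // (requires import org.elasticsearch.spark.rdd.EsSpark)
  EsSpark.saveToEs(rdd.map(_.value()), "test/doc")

  // commit the offsets only after the write has been issued
  inputDStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
})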

Reference: spark.rdd.MapPartitionsRDD cannot be cast to streaming.kafka010.HasOffsetRange
