Spark Streaming + Kafka direct: restoring offsets from Zookeeper

In the previous post, 《將 Spark Streaming + Kafka direct 的 offset 保存進入Zookeeper》 (saving Spark Streaming + Kafka direct offsets into Zookeeper), we successfully saved the offset of every partition of the topic into Zookeeper, so that the monitoring tool could do its job. Now it is time to deal with the problem described in 《“Spark Streaming + Kafka direct + checkpoints + 代碼改變” 引發的問題》 (the problem caused by "Spark Streaming + Kafka direct + checkpoints + code changes").


The solution is: fetch the current offset of every partition of the topic from Kafka, fetch the offset that the consumer has recorded in Zookeeper for every partition of that topic, and then merge the two according to the needs of your project.

I. Implementation

1. The program:

// Imports assume Spark 1.x with the Kafka 0.8 direct API (spark-streaming-kafka),
// the Kafka 0.8.x client, Curator, Jackson and Guava on the classpath.
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.collect.ImmutableMap;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.cluster.Broker;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetRequest;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndMetadata;
import kafka.serializer.StringDecoder;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryUntilElapsed;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.api.java.JavaStreamingContextFactory;
import org.apache.spark.streaming.kafka.HasOffsetRanges;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.kafka.OffsetRange;

import scala.Tuple2;

public class SparkStreamingOnKafkaDirect {

    public static JavaStreamingContext createContext(){

        SparkConf conf = new SparkConf().setMaster("local[4]").setAppName("SparkStreamingOnKafkaDirect");

        JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(30));
        jsc.checkpoint("/checkpoint");

        Map<String, String> kafkaParams = new HashMap<String, String>();
        kafkaParams.put("metadata.broker.list","192.168.1.151:1234,192.168.1.151:1235,192.168.1.151:1236");

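        // First ask the Kafka brokers for the latest offset of every partition of the topic
        // (note that the first argument is the broker list, not a Zookeeper address).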
        Map<TopicAndPartition, Long> topicOffsets = getTopicOffsets("192.168.1.151:1234,192.168.1.151:1235,192.168.1.151:1236", "kafka_direct");

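        // Then read the offsets this consumer group last committed to Zookeeper;
        // where present they override the values obtained from Kafka.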
        Map<TopicAndPartition, Long> consumerOffsets = getConsumerOffsets("192.168.1.151:2181", "spark-group", "kafka_direct");
        if(null!=consumerOffsets && consumerOffsets.size()>0){
            topicOffsets.putAll(consumerOffsets);
        }

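        // For step 4 of the test below, uncomment this loop to force every partition
        // back to offset 0 so that all messages are reprocessed.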
//        for (Map.Entry<TopicAndPartition, Long> item : topicOffsets.entrySet()) {
//            item.setValue(0L);
//        }

        // Print the starting offset that will be used for each partition.
        for (Map.Entry<TopicAndPartition, Long> entry : topicOffsets.entrySet()) {
            System.out.println(entry.getKey().topic() + "\t" + entry.getKey().partition() + "\t" + entry.getValue());
        }

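        // Create the direct stream starting from the merged offsets; the message handler
        // extracts just the message payload as a String.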
        JavaInputDStream<String> lines = KafkaUtils.createDirectStream(jsc,
                String.class, String.class, StringDecoder.class,
                StringDecoder.class, String.class, kafkaParams,
                topicOffsets, new Function<MessageAndMetadata<String,String>,String>() {

                    public String call(MessageAndMetadata<String, String> v1)
                            throws Exception {
                        return v1.message();
                    }
                });



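        // Holds the OffsetRange[] of the current batch; it is filled in transform() below
        // and read in foreachRDD() when the offsets are saved to Zookeeper.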
        final AtomicReference<OffsetRange[]> offsetRanges = new AtomicReference<>();

        JavaDStream<String> words = lines.transform(
                new Function<JavaRDD<String>, JavaRDD<String>>() {
                    @Override
                    public JavaRDD<String> call(JavaRDD<String> rdd) throws Exception {
                      OffsetRange[] offsets = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
                      offsetRanges.set(offsets);
                      return rdd;
                    }
                  }
                ).flatMap(new FlatMapFunction<String, String>() {
                    public Iterable<String> call(
                           String event)
                            throws Exception {
                        return Arrays.asList(event);
                    }
                });

        JavaPairDStream<String, Integer> pairs = words
                .mapToPair(new PairFunction<String, String, Integer>() {

                    public Tuple2<String, Integer> call(
                            String word) throws Exception {
                        return new Tuple2<String, Integer>(
                                word, 1);
                    }
                });

        JavaPairDStream<String, Integer> wordsCount = pairs
                .reduceByKey(new Function2<Integer, Integer, Integer>() {
                    public Integer call(Integer v1, Integer v2)
                            throws Exception {
                        return v1 + v2;
                    }
                });

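        // After each batch, write the untilOffset of every partition to the Zookeeper path
        // /consumers/<group>/offsets/<topic>/<partition>, the same layout the old high-level
        // consumer uses, so monitoring tools such as Kafka Manager can pick it up.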
        lines.foreachRDD(new VoidFunction<JavaRDD<String>>(){
            @Override
            public void call(JavaRDD<String> t) throws Exception {

                ObjectMapper objectMapper = new ObjectMapper();

                CuratorFramework  curatorFramework = CuratorFrameworkFactory.builder()
                        .connectString("192.168.1.151:2181").connectionTimeoutMs(1000)
                        .sessionTimeoutMs(10000).retryPolicy(new RetryUntilElapsed(1000, 1000)).build();

                curatorFramework.start();

                for (OffsetRange offsetRange : offsetRanges.get()) {
                    final byte[] offsetBytes = objectMapper.writeValueAsBytes(offsetRange.untilOffset());
                    String nodePath = "/consumers/spark-group/offsets/" + offsetRange.topic()+ "/" + offsetRange.partition();
                    if (curatorFramework.checkExists().forPath(nodePath) != null) {
                        curatorFramework.setData().forPath(nodePath, offsetBytes);
                    } else {
                        curatorFramework.create().creatingParentsIfNeeded().forPath(nodePath, offsetBytes);
                    }
                }

                curatorFramework.close();
            }

        });

        wordsCount.print();

        return jsc;
    }


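    /**
     * Reads the offsets that the given consumer group has committed to Zookeeper for the
     * given topic (the children of /consumers/<groupID>/offsets/<topic>).
     * Returns an empty map if that node does not exist yet.
     */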
    public static Map<TopicAndPartition,Long> getConsumerOffsets(String zkServers, 
                String groupID, String topic) { 
        Map<TopicAndPartition,Long> retVals = new HashMap<TopicAndPartition,Long>();

        ObjectMapper objectMapper = new ObjectMapper();
        CuratorFramework  curatorFramework = CuratorFrameworkFactory.builder()
                .connectString(zkServers).connectionTimeoutMs(1000)
                .sessionTimeoutMs(10000).retryPolicy(new RetryUntilElapsed(1000, 1000)).build();

        curatorFramework.start();

        try {
            String nodePath = "/consumers/" + groupID + "/offsets/" + topic;
            if (curatorFramework.checkExists().forPath(nodePath) != null) {
                List<String> partitions = curatorFramework.getChildren().forPath(nodePath);
                for (String partition : partitions) {
                    int partitionId = Integer.valueOf(partition);
                    Long offset = objectMapper.readValue(
                            curatorFramework.getData().forPath(nodePath + "/" + partition), Long.class);
                    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partitionId);
                    retVals.put(topicAndPartition, offset);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        curatorFramework.close();

        return retVals;
    } 

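    /**
     * Asks the Kafka brokers (comma-separated host:port list) for the latest offset of every
     * partition of the topic, using the SimpleConsumer / OffsetRequest API.
     */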
    public static Map<TopicAndPartition, Long> getTopicOffsets(String brokerList, String topic) {
        Map<TopicAndPartition, Long> retVals = new HashMap<TopicAndPartition, Long>();

        for (String broker : brokerList.split(",")) {
            SimpleConsumer simpleConsumer = new SimpleConsumer(broker.split(":")[0],
                    Integer.valueOf(broker.split(":")[1]),
                    10000,
                    1024,
                    "consumer");
            TopicMetadataRequest topicMetadataRequest = new TopicMetadataRequest(Arrays.asList(topic));
            TopicMetadataResponse topicMetadataResponse = simpleConsumer.send(topicMetadataRequest);

            for (TopicMetadata metadata : topicMetadataResponse.topicsMetadata()) {
                for (PartitionMetadata part : metadata.partitionsMetadata()) {
                    Broker leader = part.leader();
                    if (leader != null) {
                        TopicAndPartition topicAndPartition = new TopicAndPartition(topic, part.partitionId());

                        // Request the latest offset; if this broker is not the leader of the
                        // partition the response carries an error and the partition is skipped here.
                        PartitionOffsetRequestInfo partitionOffsetRequestInfo =
                                new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 10000);
                        OffsetRequest offsetRequest = new OffsetRequest(
                                ImmutableMap.of(topicAndPartition, partitionOffsetRequestInfo),
                                kafka.api.OffsetRequest.CurrentVersion(), simpleConsumer.clientId());
                        OffsetResponse offsetResponse = simpleConsumer.getOffsetsBefore(offsetRequest);

                        if (!offsetResponse.hasError()) {
                            long[] offsets = offsetResponse.offsets(topic, part.partitionId());
                            retVals.put(topicAndPartition, offsets[0]);
                        }
                    }
                }
            }
            simpleConsumer.close();
        }
        return retVals;
    }

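    /**
     * Restores the StreamingContext from the checkpoint directory if one exists,
     * otherwise builds a new context via createContext().
     */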
    public static void main(String[] args)  throws Exception{
        JavaStreamingContextFactory factory = new JavaStreamingContextFactory() {
            public JavaStreamingContext create() {
              return createContext();
            }
          };

        JavaStreamingContext jsc = JavaStreamingContext.getOrCreate("/checkpoint", factory);

        jsc.start();

        jsc.awaitTermination();
        jsc.close();

    }

}

2. Prepare the test environment and note the current state of the consumer, as shown below:
[Screenshot: Kafka Manager view of the consumer]
The screen shows that all existing messages have already been processed.
Now send one new message to the kafka_direct topic:
[Screenshot: producing a new message to kafka_direct]

3. Run the Spark Streaming program (note: clear the checkpoint directory first) and watch both the console output and the spark-group entry in Kafka Manager:
Console output:
[Screenshot: console output]
The offsets read from Zookeeper are printed.

[Screenshot: console output]
The results computed from the messages consumed from the kafka_direct topic are printed.

[Screenshot: Kafka Manager view of spark-group]
The screenshot shows that the consumer offset and the logSize are now equal.

4. Next, manually set the offset of every partition of the topic to 0 and see whether the program prints result data for all of the messages.
(Simply uncomment the commented-out loop in the program above; a sketch follows below.)
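For reference, this is that loop with the comment markers removed; it overwrites every starting offset with 0 so that the direct stream re-reads the topic from the beginning:

        // Force every partition to start from offset 0 (reprocess all messages).
        for (Map.Entry<TopicAndPartition, Long> item : topicOffsets.entrySet()) {
            item.setValue(0L);
        }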

5. Run the Spark Streaming program again (again, clear the checkpoint directory first) and check the console output:
[Screenshot: console output]
The output shows that the Spark Streaming program reprocessed all of the earlier test messages.

With this, the problem described in 《“Spark Streaming + Kafka direct + checkpoints + 代碼改變” 引發的問題》 is fully resolved.

Note: the source code was written in a hurry, so the comments are brief; more detailed ones may be added later.
