Java: Getting the current offset of each partition of a specified topic

First, add the dependency in pom.xml:

<dependencies>
	<dependency>
		<groupId>org.springframework.kafka</groupId>
		<artifactId>spring-kafka</artifactId>
		<!--<version>2.1.10.RELEASE</version>-->
	</dependency>
</dependencies>

Next, configure the properties file.

Since we only need the current offset of each partition of the topics this service consumes, only the consumer has to be configured in the properties file. Note that different group ids consuming the same topic will see different current offsets. (A programmatic equivalent of this configuration is sketched right after the property list below.)

# Kafka consumer configuration
spring.kafka.consumer.bootstrap-servers=
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.fetch-max-wait=30s
spring.kafka.consumer.fetch-min-size=
spring.kafka.consumer.group-id=
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.max-poll-records=500
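
If Spring Boot auto-configuration is not used, the same settings can be assembled by hand into the Map<String, Object> that the utility below expects. A minimal sketch, in which the broker address and group id are placeholders:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.HashMap;
import java.util.Map;

public class ConsumerConfigExample {

    // Builds the consumer configuration map; broker address and group id are placeholders
    public static Map<String, Object> buildConsumerProperties() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }
}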

The code that fetches the current offset of each partition of a topic:

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

import java.util.*;

/*
 * @version 1.0 created by LXW on 2019/11/20 10:20
 */
public class KafkaUtil {


    /**
     * Gets the current (end) offset of every given partition.
     *
     * @param properties consumer configuration
     * @param partitions the partitions to query
     * @return a map of {partition: offset}
     */
    public static Map<TopicPartition, Long> getPartitionsOffset(Map<String, Object> properties,
                                                                Collection<TopicPartition> partitions) {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        try {
            // endOffsets returns the log-end offset of each partition, i.e. the offset of the next record to be written
            return consumer.endOffsets(partitions);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        } finally {
            consumer.close();
        }
    }


    /**
     * Gets the current offset of every partition of each topic consumed by this service.
     *
     * @param properties consumer configuration
     * @param topics     the topics to query
     * @return a map of {topic: {partition: offset}}
     */
    public static Map<String, Map<TopicPartition, Long>> getTopicPartitionsOffset(Map<String, Object> properties, Set<String> topics) {
        Map<String, Map<TopicPartition, Long>> topicPartitionMap = new HashMap<>();
        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
        try {
            for (String topic : topics) {
                // Look up all partitions of the topic, then query their end offsets
                List<PartitionInfo> partitionsInfo = kafkaConsumer.partitionsFor(topic);
                Set<TopicPartition> topicPartitions = new HashSet<>();
                for (PartitionInfo partitionInfo : partitionsInfo) {
                    topicPartitions.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
                }
                Map<TopicPartition, Long> topicPartitionsOffset = getPartitionsOffset(properties, topicPartitions);
                topicPartitionMap.put(topic, topicPartitionsOffset);
            }
            return topicPartitionMap;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        } finally {
            kafkaConsumer.close();
        }
    }

}

How to use

    public static void main(String[] args) {
        Set<String> topics = new HashSet<>(Arrays.asList("test1", "test2"));
        // Build the consumer properties programmatically via Spring Boot's KafkaProperties
        KafkaProperties kafkaProperties = new KafkaProperties();
        kafkaProperties.setBootstrapServers(Arrays.asList("127.0.0.1:9092"));
        kafkaProperties.setClientId("clientId");
        Map<String, Object> consumerProperties = kafkaProperties.buildConsumerProperties();
        Map<String, Map<TopicPartition, Long>> serviceTopicPartitionsOffset = KafkaUtil.getTopicPartitionsOffset(consumerProperties, topics);
        // TODO what you want to do
    }
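
As a simple illustration of using the result, a hypothetical helper (not part of the original post) can print every partition's offset from the returned nested map:

    // Hypothetical helper: prints every topic/partition offset returned by KafkaUtil.getTopicPartitionsOffset
    public static void printOffsets(Map<String, Map<TopicPartition, Long>> offsets) {
        offsets.forEach((topic, partitionOffsets) ->
                partitionOffsets.forEach((tp, offset) ->
                        System.out.println(topic + "-" + tp.partition() + " end offset = " + offset)));
    }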

Alternatively, the consumer properties can be loaded automatically by injecting the configuration from the properties file, as sketched below.
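
A minimal sketch of that injection approach, assuming Spring Boot's Kafka auto-configuration is on the classpath (the component and method names are illustrative):

import org.apache.kafka.common.TopicPartition;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.stereotype.Component;

import java.util.Map;
import java.util.Set;

@Component
public class TopicOffsetService {

    private final KafkaProperties kafkaProperties;

    // Spring injects KafkaProperties populated from the spring.kafka.* entries in the properties file
    public TopicOffsetService(KafkaProperties kafkaProperties) {
        this.kafkaProperties = kafkaProperties;
    }

    public Map<String, Map<TopicPartition, Long>> currentOffsets(Set<String> topics) {
        Map<String, Object> consumerProperties = kafkaProperties.buildConsumerProperties();
        return KafkaUtil.getTopicPartitionsOffset(consumerProperties, topics);
    }
}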
