Kafka 2.0 - Transactional Sending (the transactional producer)

Transactions
Transactions are a new feature introduced in Kafka 0.11. They are similar to database transactions, except that the data source here is Kafka: a Kafka transaction groups a series of producer sends and consumer offset commits into a single atomic operation, so they all succeed or all fail together.

A first transaction example

The producer below sends two messages inside a single transaction; wrapping them in a transaction means either both are sent successfully or neither is.

/**
 * A transaction that only produces messages (multiple sends in one transaction).
 */
public class OnlyWriteProducer {
    protected static final Logger logger = LoggerFactory.getLogger(OnlyWriteProducer.class);
    public static final String TOPIC_NAME = "producer-0"; 

    public static void main(String[] args) {
         Producer<String, User> producer = new KafkaTransactionBuilder<String, User, byte[]>().buildProducer();
         // Initialize transactions (must be called once before the first beginTransaction)
         producer.initTransactions();
         // Begin the transaction
         producer.beginTransaction();

         try{
             User user = new User(101L,"kafka","[email protected]",1);
             producer.send(new ProducerRecord<String, User>(TOPIC_NAME, Long.toString(user.getId()), user));

             User user2 = new User(102L,"netty","[email protected]",0);
             producer.send(new ProducerRecord<String, User>(TOPIC_NAME, Long.toString(user2.getId()), user2));
             // Commit the transaction
             producer.commitTransaction();
         }catch(Exception e){
             logger.error("Kafka message send failed!", e);
             // Abort the transaction
             producer.abortTransaction();
         }

         producer.close();
    }
}
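
To see the atomicity from the consumer's side, it helps to force a failure between the two sends. The sketch below is mine, not part of the original sample; it reuses the KafkaTransactionBuilder and User helpers from this article and throws a simulated exception, so the transaction is aborted and a read_committed consumer never sees the first message even though it was written to the log:

public class AbortDemoProducer {
    protected static final Logger logger = LoggerFactory.getLogger(AbortDemoProducer.class);

    public static void main(String[] args) {
        Producer<String, User> producer =
                new KafkaTransactionBuilder<String, User, byte[]>().buildProducer("abort-demo-transaction");
        producer.initTransactions();
        producer.beginTransaction();

        try {
            User user = new User(101L, "kafka", "[email protected]", 1);
            producer.send(new ProducerRecord<String, User>(OnlyWriteProducer.TOPIC_NAME, Long.toString(user.getId()), user));
            // Simulated failure between the two sends (for demonstration only)
            throw new RuntimeException("simulated failure before the second send");
        } catch (Exception e) {
            logger.error("Kafka message send failed!", e);
            // The first record is already in the log, but the abort marker written here
            // hides it from read_committed consumers
            producer.abortTransaction();
        }

        producer.close();
    }
}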

The consumer is shown below. It does not participate in a transaction itself, but it is built with read_committed isolation (see the builder further down) and commits offsets manually:

public class TransactionConsumer {
    protected static final Logger logger = LoggerFactory.getLogger(TransactionConsumer.class);
    private static boolean isClose = false;


    public static void main(String[] args){
        KafkaTransactionBuilder<String, User, byte[]> builder = new KafkaTransactionBuilder<String, User, byte[]>();
        Consumer<String, byte[]> consumer = builder.buildConsumer();

        consumer.subscribe(Arrays.asList(OnlyWriteProducer.TOPIC_NAME));
        try{
            while (!isClose) {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, byte[]> record : records)
                    System.out.printf("key = %s,offset=%s partition=%s value = %s%n", record.key(),record.offset(),record.partition(), new User(record.value()));

                // Manually commit the consumed offsets
                consumer.commitAsync();
            }   

        }catch(Exception e){
            logger.error("kafka消息消費異常!",e);
        }
        consumer.close();
    }
}
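
One caveat about commitAsync(): if the loop exits or the process shuts down while the last asynchronous commit is still in flight, those offsets are lost and the messages will be re-consumed after a restart. A common pattern, shown here as a sketch of the loop above rather than as part of the original sample, is to fall back to a blocking commitSync() on the way out:

        try {
            while (!isClose) {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, byte[]> record : records)
                    System.out.printf("key = %s,offset=%s partition=%s value = %s%n", record.key(), record.offset(), record.partition(), new User(record.value()));

                // Fast, non-blocking commit on the happy path
                consumer.commitAsync();
            }
        } catch (Exception e) {
            logger.error("Kafka message consumption failed!", e);
        } finally {
            try {
                // Blocking commit with retries, so the final offsets survive shutdown
                consumer.commitSync();
            } finally {
                consumer.close();
            }
        }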

The producer and consumer configuration is shown below. To use transactions, the producer must set transactional.id, and enable.idempotence must be set to true:

public class KafkaTransactionBuilder<T,P,C> extends KafkaBuilder<T,P,C>{

    /**
     * Builds the transactional producer.
     */
    @Override
    public KafkaProducer<T,P> buildProducer(){
        return buildProducer("default-transaction");
    }

    public KafkaProducer<T,P> buildProducer(String transactional_id){
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("batch.size", default_batch_size);
        props.put("buffer.memory", default_buffer_size);

        props.put("key.serializer", default_serializer);
        props.put("value.serializer", "com.yang.kafka.serialization.ProtobufSerializer");
        // Set the transactional id (required for transactions)
        props.put("transactional.id", transactionalId);

        props.put("acks", "all");
        props.put("retries", 3);
        props.put("enable.idempotence", true);
        props.put("linger.ms", 1);
        return new KafkaProducer<>(props);
    }


    /**
     * Builds the read_committed consumer.
     */
    @Override
    public KafkaConsumer<T,C> buildConsumer(){
        return buildConsumer(default_group_id);
    }

    public KafkaConsumer<T,C> buildConsumer(String gourpId){
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("group.id", gourpId);
        props.put("key.deserializer", default_deserializer);
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // Set the isolation level: only read committed transactional records
        props.put("isolation.level", "read_committed");
        // Disable auto commit
        props.put("enable.auto.commit", false);

        return new KafkaConsumer<>(props);
    }
}
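
One operational note: the transaction coordinator keeps its state in the internal __transaction_state topic, whose replication factor defaults to 3. On a single-broker development cluster, initTransactions() will block and eventually time out because that internal topic cannot be created. For such a setup the broker side can be relaxed in server.properties, for example:

# Only for single-broker development clusters
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1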

KafkaBuilder:

public class KafkaBuilder<T,P,C> {
    protected static final String servers = "192.168.1.3:9092,192.168.1.128:9092,192.168.1.130:9092";

    protected static final String default_serializer = "org.apache.kafka.common.serialization.StringSerializer";
    protected static final String default_deserializer = "org.apache.kafka.common.serialization.StringDeserializer";

    protected static final int default_buffer_size = 33554432;
    protected static final int default_batch_size = 16384;
    protected static final String default_group_id = "test";

    public Producer<T,P> buildProducer(){
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", default_batch_size);
        props.put("linger.ms", 1);
        props.put("buffer.memory", default_buffer_size);
        props.put("key.serializer", default_serializer);
        props.put("value.serializer", default_serializer);
        return new KafkaProducer<>(props);
    }

    public Consumer<T,C> buildConsumer(){
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("group.id", default_group_id);
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", default_deserializer);
        props.put("value.deserializer", default_deserializer);
        return new KafkaConsumer<>(props);
    }

}
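Note the contrast with KafkaTransactionBuilder: this base producer runs with retries=0 and without idempotence, so a transient broker error can lose a message, and enabling retries without idempotence could duplicate one. A minimal usage sketch (the topic name here is a placeholder of mine):

Producer<String, String> producer = new KafkaBuilder<String, String, String>().buildProducer();
producer.send(new ProducerRecord<String, String>("some-topic", "key", "value"));
producer.close();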

Simply put, Kafka transactions come in three flavors; in my opinion only 1 and 3 are genuinely useful:

  1. Producing multiple messages within one transaction
  2. Consuming multiple messages within one transaction (in practice this adds little)
  3. Consuming and producing within the same transaction. There are two sub-cases: consume first and then produce, a pattern known as consume-transform-produce; or produce first and then consume, which makes no practical sense once you think it through. So when we speak of consuming and producing in one transaction, we mean consume-transform-produce.

The consume-transform-produce pattern
The example below does the following: it receives user records (User) that do not carry an email address, adds an email to each record, and then sends the enriched record on to another topic.

public class ConsumeTransformProduce {
    protected static final Logger logger = LoggerFactory.getLogger(ConsumeTransformProduce.class);

    public static final String TOPIC_NAME = "producer-1"; 
    public static final String GROUP_ID = "consume_transform_produce"; 

    private static boolean isClose = false;

    public static void main(String[] args) {
        KafkaTransactionBuilder<String, User, byte[]> builder = new KafkaTransactionBuilder<String, User, byte[]>();
        Consumer<String, byte[]> consumer = builder.buildConsumer(GROUP_ID);
        Producer<String, User> producer = builder.buildProducer("producer-1-transaction");
        // Initialize transactions
        producer.initTransactions();

        /** Subscribe to producer-0 **/
        consumer.subscribe(Arrays.asList(OnlyWriteProducer.TOPIC_NAME));

        while (!isClose) {
            // Begin the transaction
            producer.beginTransaction();

            try {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(100));

                Map<TopicPartition, OffsetAndMetadata> commits = Maps.newHashMap();
                for (ConsumerRecord<String, byte[]> record : records){
                    User user = new User(record.value());
                    System.out.printf("key = %s,offset=%s partition=%s value = %s%n",record.key(), record.offset(), record.partition(),user);

                    // Commit the position of the *next* record to consume, hence offset + 1
                    commits.put(new TopicPartition(record.topic(), record.partition()),new OffsetAndMetadata(record.offset() + 1));

                    user.setEmail("[email protected]");
                    /** Send to producer-1 **/
                    producer.send(new ProducerRecord<String, User>(TOPIC_NAME, Long.toString(user.getId()), user));
                }
                // Commit the consumed offsets as part of the transaction
                producer.sendOffsetsToTransaction(commits, GROUP_ID);
                // Commit the transaction
                producer.commitTransaction();
            } catch (Exception e) {
                logger.error("Kafka message processing failed!", e);
                // Abort the transaction
                producer.abortTransaction();
            }
        }

        consumer.close();
        producer.close();
    }
}
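
The overload of sendOffsetsToTransaction that takes a plain group id, as used above, matches the 2.0 clients this article targets. On Kafka clients 2.5 and newer there is an overload that takes the consumer's ConsumerGroupMetadata instead (KIP-447); it is preferred because it lets the broker fence zombie consumer instances more precisely:

// Kafka clients 2.5+: pass the live group metadata instead of the plain group id
producer.sendOffsetsToTransaction(commits, consumer.groupMetadata());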

Sample source code: https://github.com/Mryangtaofang/sample
