Kafka API Asynchronous Send

Producer API

  1. Message send flow

    Kafka's producer sends messages asynchronously. Two threads are involved in the send path: the main thread and the Sender thread, plus a shared buffer, the RecordAccumulator. The main thread appends messages to the RecordAccumulator, and the Sender thread continuously pulls batches from the RecordAccumulator and sends them to the Kafka brokers.
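The handoff between the two threads can be sketched in plain Java, with no Kafka dependency. The blocking queue below merely stands in for the RecordAccumulator (the real accumulator batches records per partition, which this sketch omits), and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Plain-Java sketch of the main-thread / Sender-thread handoff through a
// shared buffer. The queue stands in for the RecordAccumulator.
public class AccumulatorSketch {

    // Produce `count` records on the calling ("main") thread and drain them
    // on a separate "Sender" thread; returns what the sender "sent".
    static List<String> produceAndDrain(int count) throws InterruptedException {
        BlockingQueue<String> accumulator = new ArrayBlockingQueue<>(1024);
        List<String> sent = new ArrayList<>();

        // "Sender" thread: pulls records from the buffer and "sends" them.
        Thread sender = new Thread(() -> {
            try {
                while (true) {
                    String record = accumulator.poll(200, TimeUnit.MILLISECONDS);
                    if (record == null) break; // buffer stayed empty: stop the sketch
                    synchronized (sent) { sent.add(record); }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();

        // "main" thread: appends records and returns immediately (async send).
        for (int i = 0; i < count; i++) {
            accumulator.put("record-" + i);
        }
        sender.join();
        return sent;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(produceAndDrain(3));
    }
}
```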

  2. Related parameters

    batch.size: the Sender sends data only after a batch has accumulated batch.size bytes.
    linger.ms: if a batch has not reached batch.size, the Sender waits at most linger.ms before sending it anyway.
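The interaction of the two settings reduces to a simple decision rule, sketched below in plain Java (illustrative only; the real Sender also considers flushes, memory pressure, and in-flight requests):

```java
// Illustrative decision rule for when a producer batch becomes ready to send:
// either it has reached batch.size bytes, or it has waited linger.ms.
public class BatchReadySketch {

    static boolean batchReady(int batchBytes, long batchAgeMs,
                              int batchSize, long lingerMs) {
        return batchBytes >= batchSize || batchAgeMs >= lingerMs;
    }

    public static void main(String[] args) {
        int batchSize = 16384; // 16 KB, the default batch.size
        long lingerMs = 1;

        System.out.println(batchReady(16384, 0, batchSize, lingerMs)); // full batch -> true
        System.out.println(batchReady(100, 5, batchSize, lingerMs));   // linger expired -> true
        System.out.println(batchReady(100, 0, batchSize, lingerMs));   // neither -> false
    }
}
```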

    Asynchronous send

    1. Add the dependency
    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.0</version>
        </dependency>
    </dependencies>
    
    2. Write the code

    Relevant classes:

    • KafkaProducer: the producer object used to send data;

    • ProducerConfig: holds the names of the configuration parameters;

    • ProducerRecord: each message must be wrapped in a ProducerRecord object.

      1. API without a callback
      package codes.coffeecode.kafka.producer;
      
      import org.apache.kafka.clients.producer.*;
      import java.util.Properties;
      
      /**
       * Producer API without a callback
       */
      public class CustomProducer {
          public static void main(String[] args) {
              //1. Create the Kafka producer configuration
              Properties properties = new Properties();
      
              //2. Kafka cluster to connect to (broker-list)
              //ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -- "bootstrap.servers"
              properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"hadoop141:9092");
      
              //3. ACK acknowledgment level
              //ProducerConfig.ACKS_CONFIG -- "acks"
              properties.put(ProducerConfig.ACKS_CONFIG,"all");
      
              //4. Number of retries
              //ProducerConfig.RETRIES_CONFIG -- "retries"
              properties.put(ProducerConfig.RETRIES_CONFIG,1);
      
              //5. Batch size: 16 KB
              properties.put("batch.size",16384);
      
              //6. Linger time: 1 ms
              properties.put("linger.ms",1);
      
              //7. RecordAccumulator buffer size: 32 MB
              properties.put("buffer.memory", 33554432);
      
              //8. Serializer classes for key and value
              properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      
              //9. Create the producer
              Producer<String,String> producer = new KafkaProducer<String, String>(properties);
      
              //10. Send data
              for (int i = 0; i < 10 ; i++){
                  producer.send(new ProducerRecord<String, String>("bigdata","CustomProducer--"+i));
              }
      
              //11. Close the producer
              producer.close();
          }
      }
      
      2. API with a callback

        The callback is invoked asynchronously when the producer receives the ack. It takes two parameters, a RecordMetadata and an Exception: if the Exception is null the message was sent successfully; if it is not null, the send failed.
        Note: failed sends are retried automatically (per the retries setting), so there is no need to retry manually inside the callback.
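The "exactly one of the two arguments is meaningful" contract can be mimicked in plain Java. The names below (fakeSend, the metadata string) are purely illustrative and are not the Kafka API:

```java
import java.util.function.BiConsumer;

// Plain-Java mimic of the onCompletion(metadata, exception) contract:
// on success the callback gets (metadata, null); on failure, (null, error).
public class CallbackContractSketch {

    // Pretend send that invokes the callback the way Kafka's Callback is invoked.
    static void fakeSend(boolean succeed, BiConsumer<String, Exception> callback) {
        if (succeed) {
            callback.accept("partition=2, offset=9", null);
        } else {
            callback.accept(null, new RuntimeException("send failed"));
        }
    }

    public static void main(String[] args) {
        fakeSend(true, (metadata, exception) -> {
            if (exception == null) {
                System.out.println("ok: " + metadata);
            } else {
                System.out.println("failed: " + exception.getMessage());
            }
        });
    }
}
```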

        package codes.coffeecode.kafka.producer;
        
        import org.apache.kafka.clients.producer.*;
        
        import java.util.Properties;
        
        /**
         * Producer API with a callback
         */
        public class CallBackProducer {
            public static void main(String[] args) {
                //1. Create the producer configuration
                Properties properties = new Properties();
                properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"hadoop141:9092");
        
                properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
                properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        
                //2. Create the producer
                KafkaProducer<String,String> producer = new KafkaProducer<String, String>(properties);
        
                //3. Send data with a callback
                for (int i = 0; i < 10; i++) {
                    producer.send(new ProducerRecord<String, String>("test","CallBack--" + i), new Callback() {
                        public void onCompletion(RecordMetadata metadata, Exception exception) {
                            if (exception == null){
                                System.out.println(metadata.partition()+"=="+metadata.offset());
                            }else {
                                exception.printStackTrace();
                            }
                        }
                    });
                }
        
                //4. Close the producer
                producer.close();
            }
        }
        

    Custom partitioner

    1. Implement the interface org.apache.kafka.clients.producer.Partitioner

    2. Custom partitioner code

    package codes.coffeecode.kafka.partitions;
    
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    
    import java.util.Map;
    
    /**
     * Custom partitioner
     */
    public class MyPartitions implements Partitioner {
        public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
            //Route every record to partition 2 (the topic needs at least 3 partitions)
            return 2;
        }
    
        public void close() {
    
        }
    
        public void configure(Map<String, ?> configs) {
    
        }
    }
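A fixed partition number like this is mainly useful for demos. A more typical strategy hashes the key, similar in spirit to Kafka's default partitioner (which hashes the serialized key bytes with murmur2). The plain-Java sketch below is illustrative only, not the actual implementation:

```java
// Illustrative key-based partition choice: hash the key and take it modulo
// the partition count. String.hashCode() here is only for illustration;
// Kafka's default partitioner hashes the serialized key bytes with murmur2.
public class KeyHashPartitionSketch {

    static int partitionFor(String key, int numPartitions) {
        if (key == null) {
            return 0; // the real partitioner handles null keys differently (sticky/round-robin)
        }
        // Mask off the sign bit so the modulo result is never negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("order-42", 3));
    }
}
```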
    
    
    3. Producer code

      package codes.coffeecode.kafka.producer;
      
      import org.apache.kafka.clients.producer.*;
      
      import java.util.Properties;
      
      public class PartitionProducer {
          public static void main(String[] args) {
              //1. Create the producer configuration
              Properties properties = new Properties();
              properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"hadoop141:9092");
      
              properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
              properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
      
              // Register the custom partitioner
              properties.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,"codes.coffeecode.kafka.partitions.MyPartitions");
      
              //2. Create the producer
              KafkaProducer<String,String> producer = new KafkaProducer<String, String>(properties);
      
              //3. Send data with a callback
              for (int i = 0; i < 10; i++) {
                  producer.send(new ProducerRecord<String, String>("test","CallBack--" + i), new Callback() {
                      public void onCompletion(RecordMetadata metadata, Exception exception) {
                          if (exception == null){
                              System.out.println(metadata.partition()+"=="+metadata.offset());
                          }else {
                              exception.printStackTrace();
                          }
                      }
                  });
              }
      
              //4. Close the producer
              producer.close();
          }
      }
      

      Result (every record lands in partition 2):

      2==9
      2==10
      2==11
      2==12
      2==13
      2==14
      2==15
      2==16
      2==17
      2==18
      

      See more: http://www.coffeecode.codes/

