Kafka (5): Ten Java API Practice Exercises

The Kafka cluster nodes used throughout are node01, node02, and node03.
Exercise 1:
Create a student topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Everything else at the defaults
	Consumer settings:
	Consumer group id: test
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Everything else at the defaults
	Simulate a producer: write code that produces the values 0-99 to the student topic
	Simulate a consumer: write code that consumes the values 0-99 from the student topic and prints them to the console
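Every exercise starts from a topic that already exists. One way to create it is the CLI bundled with the brokers; the ZooKeeper addresses below are an assumption based on the node names in this cluster (newer Kafka versions take --bootstrap-server node01:9092 instead of --zookeeper):

```shell
# Create the student topic with 3 partitions and replication factor 2
kafka-topics.sh --create \
  --zookeeper node01:2181,node02:2181,node03:2181 \
  --topic student \
  --partitions 3 \
  --replication-factor 2
```

The later exercises reuse the same command with the topic name changed.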


Producer answer code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_01 {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>("student", i + "");
            kafkaProducer.send(record);
        }
        kafkaProducer.close();
    }
}



Consumer answer code:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_01 {

    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("student"));

        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println(consumerRecord.value());
            }
        }
    }
}
Exercise 2:
Create a teacher topic in the Kafka cluster with 2 replicas and 3 partitions.
Producer settings:
Message acknowledgment (acks): all
Retries: 2
Batch size: 16384 bytes
Buffer memory: 33554432 bytes
Linger time: 1 ms per record
Key serializer: org.apache.kafka.common.serialization.StringSerializer
Value serializer: org.apache.kafka.common.serialization.StringSerializer
Distribution strategy: the default round-robin

Consumer settings:
Consumer group id: test
Enable automatic offset commits
Set the automatic offset commit interval
Offset reset behavior: when a partition has a committed offset, consume from it; when there is none, consume from the beginning
auto.offset.reset

// earliest: when a partition has a committed offset, consume from it; when there is none, consume from the beginning
// latest: when a partition has a committed offset, consume from it; when there is none, consume only data produced after the consumer starts
// none: when every partition has a committed offset, consume from those offsets; if even one partition has no committed offset, throw an exception

	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer

	Simulate a producer: write code that produces bigdata0-bigdata99 to the teacher topic
	Simulate a consumer: write code that consumes bigdata0-bigdata99 from the teacher topic and prints it to the console
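With no key and no explicit partition, the producers in these exercises spread records across partitions round-robin. The core of that strategy can be sketched without a cluster; the class and method names below are illustrative, not Kafka API:

```java
public class RoundRobinSketch {
    private int counter = 0;

    // Pick the next partition in rotation, the way keyless records
    // are cycled across a topic's partitions
    public int nextPartition(int numPartitions) {
        int p = counter % numPartitions;
        counter++;
        return p;
    }

    public static void main(String[] args) {
        RoundRobinSketch rr = new RoundRobinSketch();
        // With 3 partitions the sequence cycles 0, 1, 2, 0, 1, 2, ...
        for (int i = 0; i < 6; i++) {
            System.out.println(rr.nextPartition(3));
        }
    }
}
```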




Producer answer code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_02 {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 2);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>("teacher", "bigdata" + i);
            kafkaProducer.send(record);
        }
        kafkaProducer.close();
    }
}


Consumer answer code:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_02 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("auto.offset.reset","earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("teacher"));
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println(consumerRecord.value());
            }
        }


    }
}
Exercise 3:
Create a title topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 1
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: give every record the fixed key "title" so they all land in the same partition

	Consumer settings:
	Consumer group id: test
	Enable automatic offset commits
	Set the automatic offset commit interval
	Offset reset behavior: when a partition has a committed offset, consume from it; when there is none, consume only data produced after the consumer starts
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer

	Simulate a producer: write code that produces kafka0-kafka99 to the title topic
	Simulate a consumer: write code that consumes kafka0-kafka99 from the title topic and prints it to the console
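Records that share a key always hash to the same partition, which is why the fixed key "title" pins everything to one partition. A simplified illustration in plain Java follows; Kafka's real DefaultPartitioner applies murmur2 to the serialized key bytes, so the exact partition number differs, but the determinism is the same:

```java
public class KeyedPartitionSketch {
    // Same key in, same partition out: a deterministic hash modulo
    // the partition count. String.hashCode() stands in for murmur2 here.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Every record keyed "title" maps to one and the same partition
        System.out.println(partitionFor("title", 3));
        System.out.println(partitionFor("title", 3));
    }
}
```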




Producer answer code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_03 {
    public static void main(String[] args) {


        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 1);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>("title", "title", "kafka" + i);
            kafkaProducer.send(record);
        }
        kafkaProducer.close();

    }
}




Consumer answer code:


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_03 {

    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("auto.offset.reset","latest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("title"));
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println(consumerRecord.value());
            }
        }
    }
}
Exercise 4:
Create a title topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 2
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: send every record explicitly to partition 2

	Consumer settings:
	Consumer group id: test
	Enable automatic offset commits
	Set the automatic offset commit interval
	Offset reset behavior: when every partition has a committed offset, consume from those offsets; if even one partition has no committed offset, throw an exception
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer

	Simulate a producer: write code that produces test0-test99 to the title topic
	Simulate a consumer: write code that consumes test0-test99 from the title topic and prints it to the console



Producer answer code:


import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_04 {

    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 2);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            // The exercise requires partition 2 explicitly; the key is null
            ProducerRecord<String, String> record = new ProducerRecord<>("title", 2, null, "test" + i);
            kafkaProducer.send(record);
        }
        kafkaProducer.close();

    }
}




Consumer answer code:



import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_04 {
    public static void main(String[] args) {

        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("auto.offset.reset", "none");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("title"));
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println(consumerRecord.value());
            }
        }
    }
}
Exercise 5:
Create an order topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 1
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: a custom partitioner that sends values below 100 to partition 0, values from 100 up to 200 to partition 1, and values from 200 up to 300 to partition 2

	Consumer settings:
	Consumer group id: test
	Enable automatic offset commits
	Set the automatic offset commit interval
	Offset reset behavior: when a partition has a committed offset, consume from it; when there is none, consume from the beginning
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer

	Simulate a producer: write code that produces the values 0-299 to the order topic
	Simulate a consumer: write code that consumes the values 0-299 from the order topic and prints them to the console


Producer answer code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_05 {

public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
    props.put("acks", "all");
    props.put("retries", 1);
    props.put("batch.size", 16384);
    props.put("linger.ms", 1);
    props.put("buffer.memory", 33554432);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("partitioner.class","HomeWork.ProducerPartition");

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    for (int i = 0; i < 300; i++) {
        ProducerRecord<String, String> record = new ProducerRecord<>("order", i + "");
        producer.send(record);
    }
    producer.close();
}
}



Consumer answer code:


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_05 {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset","earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("order"));
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println("value: " + consumerRecord.value() + "    partition: " + consumerRecord.partition());
            }
        }
    }
}



Custom partitioner code:


import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

import java.util.Map;

// Registered via partitioner.class as HomeWork.ProducerPartition,
// so this class belongs in the HomeWork package
public class ProducerPartition implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        // The record value is the number as a string:
        // 0-99 -> partition 0, 100-199 -> partition 1, 200-299 -> partition 2
        int a = Integer.parseInt((String) value);
        if (a < 100) {
            return 0;
        }
        if (a < 200) {
            return 1;
        }
        return 2;
    }

    @Override
    public void close() {

    }

    @Override
    public void configure(Map<String, ?> map) {

    }
}
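The range rule above is easy to get wrong at the boundaries, and it can be sanity-checked without a cluster. The check below re-implements the same boundaries as a pure function; the class and method names are illustrative:

```java
public class PartitionRuleCheck {
    // Same routing rule as the custom partitioner:
    // 0-99 -> partition 0, 100-199 -> partition 1, 200-299 -> partition 2
    public static int route(int value) {
        if (value < 100) {
            return 0;
        }
        if (value < 200) {
            return 1;
        }
        return 2;
    }

    public static void main(String[] args) {
        // Boundary values are where an off-by-one would hide
        System.out.println(route(99));   // last value routed to partition 0
        System.out.println(route(100));  // first value routed to partition 1
        System.out.println(route(200));  // first value routed to partition 2
    }
}
```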
Exercise 6:
Create an 18BD-10 topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 2
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: send every record explicitly to partition 2

	Consumer settings:
	Consumer group id: test
	Enable automatic offset commits
	Set the automatic offset commit interval
	Offset reset behavior: when every partition has a committed offset, consume from those offsets; if even one partition has no committed offset, throw an exception
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Consume only the data in partition 2

	Simulate a producer: write code that produces test0-test99 to the 18BD-10 topic
	Simulate a consumer: write code that consumes the data in partition 2 of the 18BD-10 topic and prints it to the console



Producer answer code:


import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_06 {

    public static void main(String[] args) {


        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 2);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            // Partition 2 is specified explicitly; "test" is the record key
            ProducerRecord<String, String> record = new ProducerRecord<>("18BD-10", 2, "test", "test" + i);
            kafkaProducer.send(record);

        }
        kafkaProducer.close();

    }
}



Consumer answer code:



import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Arrays;
import java.util.Properties;


public class Consumer_06 {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset","latest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition topicPartition = new TopicPartition("18BD-10", 2);
        consumer.assign(Arrays.asList(topicPartition));
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println("partition: " + consumerRecord.partition() + "    " + consumerRecord.value());
            }
        }
    }
}
Exercise 7:
Create an 18BD-20 topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 1
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: round-robin across all partitions
	Commit each record manually

	Consumer settings:
	Consumer group id: test
	Commit offsets manually
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer

	Simulate a producer: write code that produces test0-test99 to the 18BD-20 topic
	Simulate a consumer: write code that consumes the data in partition 2 of the 18BD-20 topic and prints it to the console



Producer answer code:


import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_07 {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 1);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>("18BD-20", "test" + i);
            kafkaProducer.send(record);

        }
        kafkaProducer.close();

    }
}



Consumer answer code:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_07 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "false");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition topicPartition = new TopicPartition("18BD-20", 2);
        consumer.assign(Arrays.asList(topicPartition));
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println("partition: " + consumerRecord.partition() + "    " + consumerRecord.value());
            }
            // Commit the polled offsets manually (enable.auto.commit is false)
            consumer.commitAsync();
        }
    }
}
Exercise 8:
Create an 18BD-30 topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 1
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: round-robin across all partitions

	Consumer settings:
	Consumer group id: test
	Commit offsets manually
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Commit the offset manually after finishing each partition in turn

	Simulate a producer: write code that produces test0-test99 to the 18BD-30 topic
	Simulate a consumer: write code that consumes the data in partition 2 of the 18BD-30 topic and prints it to the console


Producer answer code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_08 {

    public static void main(String[] args) {


        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 1);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>("18BD-30", "test" + i);
            kafkaProducer.send(record);

        }
        kafkaProducer.close();

    }
}


Consumer answer code:


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_08 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "false");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition topicPartition = new TopicPartition("18BD-30", 2);
        consumer.assign(Arrays.asList(topicPartition));
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println("partition: " + consumerRecord.partition() + "    " + consumerRecord.value());
            }
            consumer.commitAsync();
        }
    }
}
Exercise 9:
Create an 18BD-40 topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 1
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: round-robin across all partitions

	Consumer settings:
	Consumer group id: test
	Enable automatic offset commits
	Offset reset behavior: when a partition has a committed offset, consume from it; when there is none, consume from the beginning
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Consume only the data in partitions 0 and 2

	Simulate a producer: write code that produces test0-test99 to the 18BD-40 topic
	Simulate a consumer: write code that consumes the data in partitions 0 and 2 of the 18BD-40 topic and prints it to the console



Producer answer code:


import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_09 {

    public static void main(String[] args) {


        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 1);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>("18BD-40", "test" + i);
            kafkaProducer.send(record);

        }
        kafkaProducer.close();

    }
}



Consumer answer code:


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_09 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("auto.offset.reset","earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition topicPartition = new TopicPartition("18BD-40", 0);
        TopicPartition topicPartition1 = new TopicPartition("18BD-40", 2);
        consumer.assign(Arrays.asList(topicPartition,topicPartition1));

        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println("partition: " + consumerRecord.partition() + "    " + consumerRecord.value());
            }
        }
    }
}
Exercise 10:
Create an 18BD-50 topic in the Kafka cluster with 2 replicas and 3 partitions.
	Producer settings:
	Message acknowledgment (acks): all
	Retries: 1
	Batch size: 16384 bytes
	Buffer memory: 33554432 bytes
	Linger time: 1 ms per record
	Key serializer: org.apache.kafka.common.serialization.StringSerializer
	Value serializer: org.apache.kafka.common.serialization.StringSerializer
	Distribution strategy: round-robin across all partitions

	Consumer settings:
	Consumer group id: test
	Enable automatic offset commits
	Offset reset behavior: when a partition has a committed offset, consume from it; when there is none, consume from the beginning
	Key deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Value deserializer: org.apache.kafka.common.serialization.StringDeserializer
	Consume only partitions 0 and 2, starting partition 0 from offset 0 and partition 2 from offset 10

	Simulate a producer: write code that produces test0-test99 to the 18BD-50 topic
	Simulate a consumer: write code that consumes the data in partitions 0 and 2 of the 18BD-50 topic and prints it to the console


Producer answer code:


import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Producer_10 {

    public static void main(String[] args) {


        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("acks", "all");
        props.put("retries", 1);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>("18BD-50", "test" + i);
            kafkaProducer.send(record);

        }
        kafkaProducer.close();

    }
}





Consumer answer code:


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Arrays;
import java.util.Properties;

public class Consumer_10 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092,node02:9092,node03:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms",  "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset","earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition topicPartition0 = new TopicPartition("18BD-50", 0);
        TopicPartition topicPartition2 = new TopicPartition("18BD-50", 2);
        consumer.assign(Arrays.asList(topicPartition0,topicPartition2));
        // Start partition 0 at offset 0 and partition 2 at offset 10 before polling
        consumer.seek(topicPartition0, 0);
        consumer.seek(topicPartition2, 10);
        while (true){
            ConsumerRecords<String, String> consumerRecords = consumer.poll(1000);
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println("offset: " + consumerRecord.offset() + "  partition: " + consumerRecord.partition() + "   " + consumerRecord.value());
            }
        }

    }
}