Hands-On: Building a Kafka Cluster with Docker, Plus Monitoring and Visualization

Pull the ZooKeeper image

docker pull wurstmeister/zookeeper

Pull the Kafka image

docker pull wurstmeister/kafka
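
Both images should now be present locally; a quick check:

docker images | grep wurstmeister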

Start a ZooKeeper container from the image

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2  --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper
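
Before starting the brokers it is worth confirming that ZooKeeper answers. A minimal check from the host, assuming nc is installed (the four-letter-word commands are enabled by default in the ZooKeeper 3.4.x this image ships):

docker logs --tail 20 zookeeper
echo ruok | nc 192.168.31.131 2181    # a healthy server replies "imok"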

Start the first Kafka broker container. KAFKA_LISTENERS binds the listener inside the container (all interfaces), while KAFKA_ADVERTISED_LISTENERS is the address the broker hands back to clients, so it must be reachable from outside the container; here both brokers advertise the host IP 192.168.31.131.

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kafka  -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.31.131:2181   -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.31.131:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092   -v /etc/localtime:/etc/localtime wurstmeister/kafka

Start the second Kafka broker container

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kafka2 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=192.168.31.131:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.31.131:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -v /etc/localtime:/etc/localtime wurstmeister/kafka
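
Both brokers should now have registered themselves in ZooKeeper. One way to confirm, assuming the relative bin/zkCli.sh path used by this image (adjust if your image differs):

docker exec -it zookeeper bin/zkCli.sh -server localhost:2181 ls /brokers/ids    # expect [0, 1]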

List the containers

docker ps -a

Entering the Kafka container gives access to Kafka's stock command-line tools:

docker exec -it kafka bash
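
From inside the container a topic spanning both brokers can be created and smoke-tested. The topic name test-topic is an assumption here, chosen to match the client code below; the Kafka scripts are already on the container's PATH:

kafka-topics.sh --create --zookeeper 192.168.31.131:2181 --replication-factor 2 --partitions 3 --topic test-topic
kafka-topics.sh --describe --zookeeper 192.168.31.131:2181 --topic test-topic
kafka-console-producer.sh --broker-list 192.168.31.131:9092,192.168.31.131:9093 --topic test-topic
kafka-console-consumer.sh --bootstrap-server 192.168.31.131:9092,192.168.31.131:9093 --topic test-topic --from-beginning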

Operating Kafka from code

Producing messages

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class KafkaProducerService {
    // The topic name is an assumption, chosen to match the test-topic created above.
    public static final String topic = "test-topic";
    public static Properties props = new Properties();
    static {
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.31.131:9092,192.168.31.131:9093");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                  // wait for all in-sync replicas
        props.put(ProducerConfig.RETRIES_CONFIG, "3");                 // retry transient send failures
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384");          // per-partition batch size in bytes
        props.put(ProducerConfig.LINGER_MS_CONFIG, "1");               // wait up to 1 ms to fill a batch
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "33554432");    // 32 MB send buffer
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    }

    public static Runnable runnable = () -> {
        try {
            Thread.sleep(5000);
            Producer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 1000; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>(topic, "key-" + i, "kafka-value-" + i);
                producer.send(record, (recordMetadata, e) -> {
                    if (e == null) {
                        System.out.println("message sent successfully");
                        System.out.println("partition : " + recordMetadata.partition()
                                + " , offset : " + recordMetadata.offset()
                                + " , topic : " + recordMetadata.topic());
                    } else {
                        System.out.println("message send failed: " + e.getMessage());
                    }
                });
            }
            // Every producer that is opened must be closed; close() also flushes pending sends.
            producer.close();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    };

    public static void main(String[] args) {
        new Thread(runnable).start();
    }
}

Consuming messages

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

@Slf4j
public class KafkaConsumerService {
    // Assumed to match the producer's topic.
    public static final String topic = "test-topic";
    public static Properties props = new Properties();
    static {
        props.put("bootstrap.servers", "192.168.31.131:9092,192.168.31.131:9093");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "latest");
        props.put("deserializer.encoding", "UTF-8");
    }

    public static Runnable runnable = () -> {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
        while (true) {
            // poll(long) has been deprecated since Kafka 2.0; pass a Duration instead.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            records.partitions().forEach(topicPartition -> {
                List<ConsumerRecord<String, String>> partitionRecords = records.records(topicPartition);
                partitionRecords.forEach(record -> log.info("Kafka consumed record: {}", record.toString()));
            });
        }
    };

    public static void main(String[] args) {
        new Thread(runnable).start();
    }
}

Kafka visualization tool: Offset Explorer

Download: http://www.kafkatool.com/download.html

Kafka monitoring tool: Kafka Eagle

Download: http://download.kafka-eagle.org/
Path after extraction: /usr/local/kafka-eagle-web-2.0.6
Edit the configuration

vim /usr/local/kafka-eagle-web-2.0.6/conf/system-config.properties

The entries to change are cluster1.zk.list and kafka.eagle.url:

kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.31.131:2181
....
kafka.eagle.webui.port=8048
kafka.eagle.url=jdbc:sqlite:/usr/local/kafka-eagle-web-2.0.6/db/ke.db

Add the environment variables

vim ~/.bash_profile

export KE_HOME=/usr/local/kafka-eagle-web-2.0.6
export PATH=$KE_HOME/bin:$PATH

source ~/.bash_profile
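
A quick check that the variable is set in the current shell:

echo $KE_HOME
which ke.sh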

Enter each Kafka container and edit kafka-server-start.sh, so the brokers expose JMX for Kafka Eagle to pull metrics from

docker exec -it kafka bash
docker exec -it kafka2 bash
cd /opt/kafka_2.13-2.7.0/bin/
vim kafka-server-start.sh

Add the JMX_PORT export inside the script's existing KAFKA_HEAP_OPTS block:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    # 這裏的端口不一定非要設置成9999,端口只要可用,均可。
    export JMX_PORT="9999" 
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
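
The change takes effect only after the brokers are restarted (each container has its own network namespace, so both can keep JMX_PORT=9999):

docker restart kafka kafka2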

Start Kafka Eagle

chmod a+x /usr/local/kafka-eagle-web-2.0.6/bin/*
ke.sh start
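
Once Kafka Eagle is up, the dashboard is served on the port configured above, here http://192.168.31.131:8048. The out-of-the-box login for Kafka Eagle 2.x is admin / 123456; it is worth changing after the first sign-in.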