Setting up a Zookeeper cluster and a Kafka cluster with docker-compose

1. Installing docker-compose

Rather than downloading straight from the official link, which can be very slow from some regions, this guide uses a mirror. You can check the latest release at
https://github.com/docker/compose/releases
and substitute the version number below accordingly.

sudo curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
 
sudo chmod +x /usr/local/bin/docker-compose
 
 # Verify the installation
docker-compose --version

Preparation:

# Create two folders to hold the docker-compose.yml files, for easier management
cd /usr/local
mkdir docker
cd docker
mkdir zookeeper
mkdir kafka

Sometimes you only need Zookeeper without Kafka, for example when using Zookeeper as the registry for Dubbo or Spring Cloud. For that reason, the setup is split into two separate docker-compose.yml files that can be installed and started independently.

2. Setting up the Zookeeper cluster

cd /usr/local/docker/zookeeper
vim docker-compose.yml

Contents of docker-compose.yml (in each ZOO_SERVERS entry, 2888 is the peer port, 3888 the leader-election port, and the ;2181 suffix the client port, using the Zookeeper 3.5+ syntax):

version: '3.3'

services:
  zoo1:
    image: zookeeper
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo2:
    image: zookeeper
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo3:
    image: zookeeper
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181

Reference:

DockerHub zookeeper image: https://hub.docker.com/_/zookeeper

Next:

# After saving and quitting with :wq
cd /usr/local/docker/zookeeper # make sure docker-compose.yml is here and that this is your current directory
docker-compose up -d
# Wait for the images to pull and the containers to start
docker ps # check container status
# Related commands for reference
docker-compose ps # check the status of the cluster's containers
docker-compose stop # stop the cluster's containers
docker-compose restart # restart the cluster's containers

If all three containers show as Up, Zookeeper has been installed and started successfully. You can probe the ports with a port-scanning tool if you like, or wait until Kafka is installed and test everything together. You can also confirm that the ensemble actually formed, as sketched below.
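A quick sanity check is to ask each node for its role. A minimal sketch, with two assumptions: the compose project is named after the zookeeper directory, so docker-compose 1.x generates container names like zookeeper_zoo1_1 (check docker ps for the real names), and zkServer.sh is on the PATH, as in the official zookeeper image.

# One node should report Mode: leader, the other two Mode: follower
docker exec -it zookeeper_zoo1_1 zkServer.sh status
docker exec -it zookeeper_zoo2_1 zkServer.sh status
docker exec -it zookeeper_zoo3_1 zkServer.sh status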

3. Setting up the Kafka cluster

Make sure the Zookeeper environment from the previous section is up and running first.

cd /usr/local/docker/kafka
vim docker-compose.yml

Contents of docker-compose.yml:

version: '2'
services:
  kafka1:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.1                     ## change to your host IP
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.1:9092    ## change to your host IP
      KAFKA_ZOOKEEPER_CONNECT: 192.168.0.1:2181,192.168.0.1:2182,192.168.0.1:2183 # host IP and client ports of the zookeeper ensemble installed above
      KAFKA_ADVERTISED_PORT: 9092
    container_name: kafka1
  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9093:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.1                     ## change to your host IP
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.1:9093    ## change to your host IP
      KAFKA_ZOOKEEPER_CONNECT: 192.168.0.1:2181,192.168.0.1:2182,192.168.0.1:2183 # host IP and client ports of the zookeeper ensemble installed above
      KAFKA_ADVERTISED_PORT: 9093
    container_name: kafka2
  kafka-manager:
    image: sheepkiller/kafka-manager              ## open-source web UI for managing Kafka clusters
    environment:
        ZK_HOSTS: 192.168.0.1:2181                ## change: host IP and client port of a zookeeper node
    ports:
      - "9000:9000"                               ## exposed port

kafka-manager is optional; remove that service from the file if you don't need it. Once it is running, its web UI is available at http://<host-IP>:9000.
Next:

# After saving and quitting with :wq
cd /usr/local/docker/kafka # make sure docker-compose.yml is here and that this is your current directory
docker-compose up -d
# Wait for the images to pull and the containers to start
docker ps # check container status
# Related commands for reference
docker-compose ps # check the status of the cluster's containers
docker-compose stop # stop the cluster's containers
docker-compose restart # restart the cluster's containers

If both broker containers (and kafka-manager, if you kept it) show as Up, Kafka has been installed and started successfully; next comes the test. You can also create the test topic up front, as sketched below.
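A minimal sketch for creating topic1 (the topic the Java code below uses) ahead of time, assuming the wurstmeister/kafka image, which puts the Kafka CLI scripts on the PATH, and the container_name: kafka1 from the compose file:

# Create a topic with 2 partitions, replicated across both brokers
docker exec -it kafka1 kafka-topics.sh --create --zookeeper 192.168.0.1:2181 --replication-factor 2 --partitions 2 --topic topic1
# Check partition leaders and replicas
docker exec -it kafka1 kafka-topics.sh --describe --zookeeper 192.168.0.1:2181 --topic topic1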

4. Testing with Java code

(1) Add the dependencies to pom.xml

        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>2.1.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.62</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.10</version>
        </dependency>

(2) Create the POJO, plus the Consumer and Producer

The User object:


import lombok.Data;

@Data
public class User {
    private String id;
    private String name;
}

Producer:

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import com.alibaba.fastjson.JSON;

public class CollectKafkaProducer {

    // The Kafka producer instance
    private final KafkaProducer<String, String> producer;
    // The topic this producer writes to
    private final String topic;

    // Initialize the Kafka configuration and instance: Properties & KafkaProducer
    public CollectKafkaProducer(String topic) {
        Properties props = new Properties();
        // Broker address
        props.put("bootstrap.servers", "192.168.0.1:9092");
        // A client.id identifying this producer
        props.put("client.id", "demo-producer-test");

        // Optional tuning settings:

//		props.put("batch.size", 16384);			// 16KB -> send a batch once 16KB of messages has accumulated
//		props.put("linger.ms", 10); 			// 10ms -> send a batch after at most a 10ms wait
//		props.put("buffer.memory", 33554432);	// 32MB -> buffer memory to improve throughput

        // Kafka serializer settings:
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Create the KafkaProducer and store the topic
        this.producer = new KafkaProducer<>(props);
        this.topic = topic;
    }

    // Send a message (synchronously or asynchronously)
    public void send(Object message, boolean syncSend) throws InterruptedException {
        try {
            // Synchronous send: block on the returned Future until the broker acknowledges
            if(syncSend) {
                producer.send(new ProducerRecord<>(topic, JSON.toJSONString(message))).get();
            }
            // Asynchronous send (the Callback is invoked once the result is known)
            else {
                producer.send(new ProducerRecord<>(topic,
                                JSON.toJSONString(message)),
                        new Callback() {
                            @Override
                            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                                if (e != null) {
                                    System.err.println("Unable to write to Kafka in CollectKafkaProducer [" + topic + "] exception: " + e);
                                }
                            }
                        });
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Close the producer, flushing any buffered messages
    public void close() {
        producer.close();
    }

    // Test entry point
    public static void main(String[] args) throws InterruptedException {
        String topic = "topic1";
        CollectKafkaProducer collectKafkaProducer = new CollectKafkaProducer(topic);

        for(int i = 0 ; i < 10; i ++) {
            User user = new User();
            user.setId(i+"");
            user.setName("張三");
            collectKafkaProducer.send(user, true);
        }

        // All sends above are synchronous, so it is safe to close immediately
        collectKafkaProducer.close();
    }

}

Consumer:

import java.time.Duration;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;


import lombok.extern.slf4j.Slf4j;

@Slf4j
public class CollectKafkaConsumer {

    // The consumer instance
    private final KafkaConsumer<String, String> consumer;
    // The topic to consume from
    private final String topic;


    // Consumer initialization
    public CollectKafkaConsumer(String topic) {
        Properties props = new Properties();
        // Broker addresses (the new consumer API connects to the brokers directly;
        // the old zookeeper.connect setting is no longer used)
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        // Consumer group id
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group-id");
        // Auto-commit toggle (usually set to false in production, i.e. manual acknowledgement)
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // Auto-commit interval, only relevant when auto-commit is enabled
//		props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        // Important offset-reset setting when no committed offset exists: latest or earliest
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // Session timeout
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        // Kafka deserializer settings
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        // Create the consumer and store the topic
        consumer = new KafkaConsumer<>(props);
        this.topic = topic;
        // Subscribe to the topic
        consumer.subscribe(Collections.singletonList(topic));

    }

    // Poll for messages in a loop and consume them, acknowledging manually
    private void receive(KafkaConsumer<String, String> consumer) {
        while (true) {
            // Poll with a 1-second timeout
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            // For each partition in the result set, get the topic name, partition and record count
            for (TopicPartition partition : records.partitions()) {
                List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
                String topic = partition.topic();
                int size = partitionRecords.size();
                log.info("topic: {}, partition: {}, record count: {}", topic, partition.partition(), size);
                // Process each partition's records separately
                for (int i = 0; i< size; i++) {
                    System.err.println("-----> value: " + partitionRecords.get(i).value());
                    long offset = partitionRecords.get(i).offset() + 1;
                    // consumer.commitSync(); // this form derives the partitions and offsets automatically
                    // This form commits an explicit partition and offset
                    consumer.commitSync(Collections.singletonMap(partition,
                            new OffsetAndMetadata(offset)));
                    log.info("commit succeeded, topic: {}, committed offset: {}", topic, offset);
                }

            }
        }
    }

    // Test entry point
    public static void main(String[] args) {
        String topic = "topic1";
        CollectKafkaConsumer collectKafkaConsumer = new CollectKafkaConsumer(topic);
        collectKafkaConsumer.receive(collectKafkaConsumer.consumer);
    }
}

Start the Producer first, then the Consumer.
The messages are consumed successfully, so the whole setup works. For a final check outside of Java, you can also tail the topic from the command line, as sketched below.
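A minimal sketch using the console consumer shipped with Kafka, under the same assumptions as before (wurstmeister/kafka image, kafka1 container name):

# Replays topic1 from the beginning and prints each message
docker exec -it kafka1 kafka-console-consumer.sh --bootstrap-server 192.168.0.1:9092 --topic topic1 --from-beginning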

Note: substitute your own IP addresses throughout. I installed on a server, so its IP has been replaced with 192.168.0.1 everywhere in this article.
