When installing Kafka, we first need to get ZooKeeper installed and running.
- Download page: https://zookeeper.apache.org/releases.html
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
Extract the archive and enter the resulting directory:
tar -zxvf zookeeper-3.4.14.tar.gz
cd zookeeper-3.4.14
Copy the sample configuration shipped with ZooKeeper to zoo.cfg:
cp conf/zoo_sample.cfg conf/zoo.cfg
Standalone mode works with the default configuration; cluster mode requires changing a few settings:
vim conf/zoo.cfg
# Heartbeat interval, in milliseconds
tickTime=2000
# Directory where snapshot files are stored
dataDir=/tmp/zookeeper
# TCP port ZooKeeper listens on for client connections
clientPort=2181
# Max time (in ticks) a follower may take to connect and sync with the leader
initLimit=5
# Max time (in ticks) a follower may take to respond to the leader
syncLimit=2
server.1=<ip>:<port>:<port>
server.2=<ip>:<port>:<port>
server.3=<ip>:<port>:<port>
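For a concrete three-node ensemble, the server lines could look like the sketch below. The hosts are placeholders; 2888 and 3888 are the conventional defaults for the follower-to-leader port and the leader-election port. Each node must additionally have a `myid` file inside `dataDir` whose content is that node's `server.N` number (e.g. the file contains just `1` on the `server.1` machine):

```properties
# Sketch only: example hosts, conventional ports
server.1=192.168.0.11:2888:3888
server.2=192.168.0.12:2888:3888
server.3=192.168.0.13:2888:3888
```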
Start ZooKeeper:
# Start zookeeper in the background
./bin/zkServer.sh start
# Or run it in the foreground to watch the log output
./bin/zkServer.sh start-foreground
# Connect with the CLI; multiple ensemble members can be given, separated by commas
./bin/zkCli.sh -server <ip>:<zk-port>
Install Kafka
- Download page: https://kafka.apache.org/downloads
wget http://mirrors.shuosc.org/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz
Extract it and adjust a few settings:
tar -zxvf kafka_2.11-1.0.0.tgz
cd kafka_2.11-1.0.0
vim config/server.properties
# Unique id of this broker within the cluster
broker.id=1
# Address the broker binds to
listeners=PLAINTEXT://:<default-port:9092>
# Address advertised to producers and consumers
advertised.listeners=PLAINTEXT://<ip>:<default-port:9092>
# Directory where Kafka stores its log segments
log.dirs=/tmp/kafka-logs
# Kafka also ships with a bundled ZooKeeper; start it this way only if you are not running a standalone one
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
# Start the Kafka server
bin/kafka-server-start.sh config/server.properties
# Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic devtopic
# List topics
bin/kafka-topics.sh --list --zookeeper localhost:2181
# Show details of a topic
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic devtopic
# Console producer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic devtopic
# Console consumer
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic devtopic
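Messages typed into the console producer carry no key, so they are spread across the topic's partitions; keyed messages always land on the same partition. Kafka's real default partitioner hashes the serialized key with murmur2 and takes the result modulo the partition count; the sketch below only illustrates that idea using `String.hashCode`, so the numbers it produces will not match what a broker computes:

```java
// Toy illustration of key-based partitioning: same key -> same partition.
// Kafka's actual default partitioner uses murmur2 over the serialized key,
// so these partition numbers will NOT match a real broker's assignment.
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the hash is non-negative, then take the modulo.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("user-42", 3);
        int p2 = partitionFor("user-42", 3);
        System.out.println("user-42 -> partition " + p1 + " (stable: " + (p1 == p2) + ")");
    }
}
```

This is why choosing a good key matters: all records with the same key are totally ordered within one partition.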
This article is based on the original post: https://blog.csdn.net/u010889616/article/details/80641922
Using Kafka in Spring Boot
Both the producer and the consumer need the spring-kafka dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Producer configuration. Note that bootstrap-servers must point at the Kafka broker (default port 9092), not at ZooKeeper:
spring.kafka.producer.bootstrap-servers=<ip>:<kafka-port>
spring.kafka.producer.retries=1
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
com.kafka.kafkacomponent.topic=devtopic
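Spring Boot maps these `spring.kafka.producer.*` settings onto the plain Kafka client configuration. A rough sketch of the equivalent raw settings is shown below; the key names are the standard Kafka producer config keys, and the broker address is a placeholder:

```java
import java.util.Properties;

// Sketch of the raw producer configuration that the Spring Boot
// properties above translate into. Builds only the Properties object,
// so it runs without a broker.
public class ProducerConfigSketch {
    static Properties producerProps() {
        Properties props = new Properties();
        // Kafka broker address (default port 9092) -- not the ZooKeeper port.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("retries", "1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps());
    }
}
```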
Write a component that sends messages to Kafka:
@Component
public class KafkaComponent {

    @Value("${com.kafka.kafkacomponent.topic}")
    private String topic;

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    public void send(String body) {
        kafkaTemplate.send(topic, body);
    }
}
Write a controller that sends messages:
@RestController
@SpringBootApplication
public class SpringbootKafkaProducerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringbootKafkaProducerApplication.class, args);
    }

    @Autowired
    private KafkaComponent kafkaComponent;

    @GetMapping("kafka")
    public void kafka(String data) {
        kafkaComponent.send(data);
    }
}
Consumer configuration. As on the producer side, bootstrap-servers points at the Kafka broker, not at ZooKeeper:
spring.kafka.consumer.bootstrap-servers=<ip>:<kafka-port>
spring.kafka.consumer.group-id=0
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
# Raise the consumer's max fetch size per partition to 2*1024*1024 bytes
spring.kafka.consumer.properties.max.partition.fetch.bytes=2097152
com.kafka.kafkaconsumerlistener.topic=devtopic
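As with the producer, these Spring Boot settings map onto standard Kafka consumer config keys. The sketch below shows the rough raw equivalent; the String deserializers are an assumption added here to mirror the producer's serializers, since the Spring configuration above does not set them explicitly:

```java
import java.util.Properties;

// Sketch of the raw consumer configuration behind the
// spring.kafka.consumer.* properties above. No broker needed
// just to build the Properties object.
public class ConsumerConfigSketch {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // Kafka broker, not ZooKeeper
        props.put("group.id", "0");
        // With no committed offset, start from the oldest available record.
        props.put("auto.offset.reset", "earliest");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "100");
        props.put("max.partition.fetch.bytes", "2097152"); // 2 * 1024 * 1024
        // Assumed deserializers, mirroring the producer's String serializers.
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps());
    }
}
```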
Create a listener method:
@Component
public class KafkaConsumerListener {

    @KafkaListener(topics = { "${com.kafka.kafkaconsumerlistener.topic}" })
    public void listen(ConsumerRecord<?, ?> record) {
        System.out.println(record.value());
    }
}