I have recently been working on integrating Spring Boot with Kafka, so I set up a single-node Kafka environment for testing.
Environment setup
1. Download and extract kafka_2.11-1.1.0.tgz, then create a kafka directory and move the extracted files into it (the paths below assume /usr/local/kafka):
wget http://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz
tar -xzvf kafka_2.11-1.1.0.tgz
mkdir /usr/local/kafka
mv kafka_2.11-1.1.0/* /usr/local/kafka
2. Kafka requires ZooKeeper, but the Kafka distribution ships with a bundled ZooKeeper that is sufficient for a single-node setup. You only need to edit "zookeeper.properties" under kafka_2.11-1.1.0/config.
The modified zookeeper.properties settings (with the directories created first):
# create the ZooKeeper data directory
mkdir /usr/local/kafka/zookeeper
# create the ZooKeeper log directory
mkdir -p /usr/local/kafka/log/zookeeper
# go to the config directory
cd /usr/local/kafka/config
vi zookeeper.properties
# ZooKeeper data directory
dataDir=/usr/local/kafka/zookeeper
# ZooKeeper transaction log directory
dataLogDir=/usr/local/kafka/log/zookeeper
clientPort=2181
maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5
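Once ZooKeeper has been started (see the startup commands below), it can be sanity-checked with the "ruok" four-letter command — a sketch assuming nc (netcat) is installed and ZooKeeper listens on the address configured above:

```shell
# ZooKeeper answers "imok" if it is up and serving requests
echo ruok | nc 192.254.64.128 2181
```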
3. Edit "server.properties" under kafka_2.11-1.1.0/config, changing log.dirs and zookeeper.connect. The former is the directory where Kafka stores its log (data) files; the latter is the ZooKeeper connection address (the port must match the clientPort above). Directory creation and settings:
# create the Kafka log directory
mkdir /usr/local/kafka/log/kafka
# go to the config directory
cd /usr/local/kafka/config
# edit the relevant Kafka settings
vi server.properties
broker.id=0
# allow topics to be deleted (the default is false)
delete.topic.enable=true
# port number; optional, superseded by the listeners setting below
port=9092
# server IP address; change to your own server's IP
host.name=192.254.64.128
# log storage path (the directory created above)
log.dirs=/usr/local/kafka/log/kafka
# ZooKeeper address and port; on a single machine this can also be localhost:2181
zookeeper.connect=192.254.64.128:2181
listeners=PLAINTEXT://192.254.64.128:9092
That completes the single-node Kafka setup.
Startup commands
Start ZooKeeper:
/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &
Start Kafka:
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
-daemon: this flag matters — it runs the broker as a daemon, so the process is not killed when the Xshell (SSH) session is closed.
jps should now list a Kafka process and a QuorumPeerMain (ZooKeeper) process.
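To shut the stack down cleanly, the distribution also ships stop scripts (stop Kafka first, then ZooKeeper):

```shell
/usr/local/kafka/bin/kafka-server-stop.sh
/usr/local/kafka/bin/zookeeper-server-stop.sh
```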
Shell commands
./kafka-topics.sh --zookeeper 192.254.64.128:2181 --describe --topic orderTopic   # describe the topic named orderTopic
./kafka-topics.sh --zookeeper 192.254.64.128:2181 --list   # list all topics
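For an end-to-end smoke test, the bundled console producer and consumer can be used — a sketch in which "orderTopic" is created with one partition and replication factor 1:

```shell
# create the topic
./kafka-topics.sh --zookeeper 192.254.64.128:2181 --create --topic orderTopic --partitions 1 --replication-factor 1
# send messages (type lines, Ctrl+C to quit)
./kafka-console-producer.sh --broker-list 192.254.64.128:9092 --topic orderTopic
# read the topic from the beginning, in another session
./kafka-console-consumer.sh --bootstrap-server 192.254.64.128:9092 --topic orderTopic --from-beginning
```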
A problem I ran into: the consumer missing the earliest messages in the topic.
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, address);
// auto-commit is disabled; offsets are committed manually below
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 1000);
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 100000);
props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 110000);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// a fresh group id on every start, so the group never has committed offsets
props.put(ConsumerConfig.GROUP_ID_CONFIG, "testconsumer" + System.currentTimeMillis());
// with no committed offsets, start reading from the beginning of the partition
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
public static Map<String, Object> consume(String topics, Map<String, Object> rtnMap) {
    if (consumer == null) {
        consumer = getConsumer();
    }
    consumer.subscribe(Collections.singletonList(topics));
    StringBuilder sb = new StringBuilder();
    // the first poll after subscribe() may return nothing: the group
    // rebalance may not finish within the 1000 ms timeout
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
    for (ConsumerRecord<String, String> record : records) {
        sb.append(record.value());
    }
    if (StringUtils.isNotEmpty(sb.toString())) {
        consumer.commitSync();
        //consumer.close();
    }
    rtnMap.put("msg", sb.toString());
    return rtnMap;
}
Keep an eye on the line records = consumer.poll(Duration.ofMillis(1000)). Because each run uses a brand-new group id, subscribe() triggers a group rebalance, and the first poll() can return before any partitions have been assigned — which makes it look as if the earliest messages were lost. Handle it according to your concrete situation.
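One common workaround is to keep polling until records arrive, giving the rebalance time to complete. A minimal sketch, not the original code — it assumes kafka-clients is on the classpath, and the class name and maxAttempts parameter are illustrative:

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollUntilAssigned {
    // Poll repeatedly so that the empty polls during the group rebalance
    // do not make the earliest messages appear lost.
    static ConsumerRecords<String, String> pollWithRetry(
            KafkaConsumer<String, String> consumer, String topic, int maxAttempts) {
        consumer.subscribe(Collections.singletonList(topic));
        for (int i = 0; i < maxAttempts; i++) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            if (!records.isEmpty()) {
                return records;
            }
            // empty poll: the rebalance may still be in progress, so try again
        }
        return ConsumerRecords.empty();
    }
}
```

Alternatively, reusing a stable group id (instead of one derived from System.currentTimeMillis()) lets committed offsets do their job and avoids triggering a fresh rebalance on every run.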