Basic Kafka Operations

All of the operations below are based on the ZooKeeper and Kafka clusters built in the previous two articles. If you are not sure how to set up the environment, see "Deploy a 3-Node ZooKeeper Pseudo-Distributed Cluster" and "Deploy a 3-Node Kafka Pseudo-Distributed Cluster".

1. List existing topics

cd /opt/kafka/
bin/kafka-topics.sh --zookeeper localhost:2181 --list

2. Create a topic

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Created topic "my-replicated-topic".
[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-topics.sh --zookeeper localhost:2181 --list
my-replicated-topic
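
If you prefer to create topics from code instead of the shell script, a minimal sketch with the Java AdminClient could look like this (the broker address matches this setup; the class name is illustrative):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "172.19.152.171:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 3, matching the command above
            NewTopic topic = new NewTopic("my-replicated-topic", 1, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}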

3. Describe a topic

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-topics.sh --describe --topic my-replicated-topic --zookeeper localhost:2181
Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
	Topic: my-replicated-topic	Partition: 0	Leader: 1	Replicas: 1,0,2	Isr: 1,0,2

4. Produce and consume messages

We demonstrate this with Kafka's built-in console producer and consumer:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-producer.sh --broker-list 172.19.152.171:9092 --topic my-replicated-topic
[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-consumer.sh --bootstrap-server 172.19.152.171:9092 --from-beginning --topic my-replicated-topic
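
The console tools are thin wrappers around the regular Kafka clients. A minimal Java producer doing the same job as kafka-console-producer.sh might look roughly like this (broker address and topic are from this example; everything else is illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "172.19.152.171:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send a single message to the topic created earlier; close() flushes it
            producer.send(new ProducerRecord<>("my-replicated-topic", "hello kafka"));
        }
    }
}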

If you want to consume from a specific offset, you must also specify the partition:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-consumer.sh --bootstrap-server 172.19.152.171:9092 --offset 0 --topic my-replicated-topic --partition 0
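
The partition is required because an offset is only meaningful within a single partition. In client code this corresponds to assigning the partition explicitly and seeking to the offset, roughly as in this sketch (broker address and topic are from this example; the rest is illustrative):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "172.19.152.171:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Offsets are per-partition, so assign partition 0 explicitly and seek to offset 0
            TopicPartition tp = new TopicPartition("my-replicated-topic", 0);
            consumer.assign(Collections.singleton(tp));
            consumer.seek(tp, 0);
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}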

5. Kafka Connect

Kafka Connect ships with Kafka. In this example a file source connector reads each line of test.txt and writes it to a topic named connect-test, and a file sink connector then writes the messages back out to test.sink.txt.

Create the test.txt file and write a few lines into it (note the append redirection >> for the second and third lines, so earlier lines are not overwritten):

cd /opt/kafka/
echo -e "hello" > test.txt
echo -e "kafka" >> test.txt
echo -e "world" >> test.txt

Edit the default configuration file, replacing the IP address in it with your own server's IP address:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# vim config/connect-standalone.properties

bootstrap.servers=172.19.152.171:9092

Start the connectors:

bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
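
The connect-standalone.sh command takes one worker config followed by one or more connector configs. For reference, in a stock Kafka distribution the two connector files usually look roughly like the following (file and topic names happen to match this example; check the actual copies under config/ rather than relying on this sketch):

# config/connect-file-source.properties
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test

# config/connect-file-sink.properties
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test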

Check the generated test.sink.txt:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# cat test.sink.txt
hello
kafka
world

List the existing topics:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
connect-test
my-replicated-topic

Start a consumer to read all of the messages in connect-test (the schema/payload wrapper below comes from the JSON converter configured in connect-standalone.properties):

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-consumer.sh --bootstrap-server 172.19.152.171:9092 --from-beginning --topic connect-test
{"schema":{"type":"string","optional":false},"payload":"hello"}
{"schema":{"type":"string","optional":false},"payload":"kafka"}
{"schema":{"type":"string","optional":false},"payload":"world"}

At this point the connectors are still running, so if you append content to test.txt, the new content shows up both in the consumer and in test.sink.txt:

# Append content
[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# echo -e "message" >> test.txt

# Consumer window
{"schema":{"type":"string","optional":false},"payload":"message"}

# Check test.sink.txt again
[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# cat test.sink.txt
hello
kafka
world
message

6. Kafka Streams

6.1 Create the input and output topics

Enable log compaction when creating the output topic, because the output stream is a changelog stream:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-plaintext-input
Created topic "streams-plaintext-input".

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-wordcount-output --config cleanup.policy=compact
Created topic "streams-wordcount-output".

Verify that the topics were created successfully:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
connect-test
my-replicated-topic
streams-plaintext-input
streams-wordcount-output

6.2 Start the WordCount application

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
[2019-12-16 12:56:05,001] WARN The configuration 'admin.retries' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)

If you followed the earlier configuration, this step will throw errors; you need to change the local IP addresses in the Kafka configuration files to localhost:

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# vim config/server1.properties
[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# vim config/server2.properties
[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# vim config/server3.properties

listeners=PLAINTEXT://localhost:9092
listeners=PLAINTEXT://localhost:9093
listeners=PLAINTEXT://localhost:9094

6.3 Start a producer terminal

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
>

6.4 Start a consumer terminal

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic streams-wordcount-output --from-beginning --formatter kafka.tools.DefaultMessageFormatter --property print.key=true --property print.value=true --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

6.5 Enter raw data in the producer terminal; the processed results appear in the consumer terminal

[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
>hello kafka world kafka hello kafka
[root@iZuf66txzmeg2fbo0i8nhkZ kafka]# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic streams-wordcount-output --from-beginning --formatter kafka.tools.DefaultMessageFormatter --property print.key=true --property print.value=true --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

hello    2
kafka    3
world    1
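
These counts are produced by the bundled WordCountDemo. Its topology is roughly equivalent to the following Java sketch (this is not the exact demo source; the application id and the splitting regex are illustrative):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("streams-plaintext-input");
        // Split each line into words, group by word, and count occurrences
        KTable<String, Long> counts = lines
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
                .groupBy((key, word) -> word)
                .count();
        // The changelog of the count table is written to the compacted output topic
        counts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}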
