Getting Started with Kafka: Command-Line Operations

1. Create a topic

[root@node01 kafka]$ bin/kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --create --replication-factor 3 --partitions 3 --topic test

# CDH version

kafka-topics --zookeeper cm1:2181,cm2:2181,cm3:2181 --create --replication-factor 3 --partitions 3 --topic test

Parameter descriptions:

  • --replication-factor: the number of replicas for each partition, default 1 (the count includes the leader replica itself)

  • --partitions: the number of partitions for the new topic, default 1

  • --topic: the name of the topic to create
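
As a side note, newer Kafka releases (roughly 2.2 and later) let kafka-topics.sh talk to the brokers directly instead of ZooKeeper. A minimal sketch of the equivalent create command, assuming such a version is installed:

# Newer Kafka versions: create the topic via the brokers (--bootstrap-server) instead of ZooKeeper
bin/kafka-topics.sh --bootstrap-server node01:9092,node02:9092,node03:9092 --create --replication-factor 3 --partitions 3 --topic test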

2. List all topics in the cluster

[root@node01 kafka]$ bin/kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --list
test

# CDH version

kafka-topics --zookeeper cm1:2181,cm2:2181,cm3:2181 --list

3. Delete a topic

[root@node01 kafka]$ bin/kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --delete --topic test

# CDH version

kafka-topics --zookeeper cm1:2181,cm2:2181,cm3:2181 --delete --topic test

Deleting a topic on the Kafka cluster only marks it for deletion.

A topic marked for deletion can still be read; its data is only removed automatically after about a week (the default retention period).

To have the delete command remove the topic immediately, set delete.topic.enable=true in config/server.properties.
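
As a reference, a minimal sketch of the line to add (the same setting must be present on every broker, and the brokers need a restart before it takes effect):

# config/server.properties on every broker
delete.topic.enable=true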

4. Start a producer and send messages

[root@node01 kafka]$ bin/kafka-console-producer.sh --broker-list node01:9092,node02:9092,node03:9092 --topic test
>hello world
>cw  cw

5. Start a consumer and consume messages

# Before Kafka 0.9, the consumer is pointed at ZooKeeper
[root@node01 kafka]$ bin/kafka-console-consumer.sh \
--zookeeper node01:2181,node02:2181,node03:2181 --topic test

# From Kafka 0.9 onward, the consumer is pointed at the Kafka brokers
[root@node01 kafka]$ bin/kafka-console-consumer.sh \
--bootstrap-server node01:9092,node02:9092,node03:9092 --topic test

[root@node01 kafka]$ bin/kafka-console-consumer.sh \
--bootstrap-server node01:9092,node02:9092,node03:9092 --from-beginning --topic test

# CDH version broker list: --bootstrap-server cm1:9092,cm2:9092,cm3:9092
# When typing a message at the console prompt, press Ctrl+Backspace to delete characters

--from-beginning: reads all of the topic's existing data from the start.

Because the topic has three partitions, ordering is only guaranteed within each partition; there is no ordering guarantee across partitions.

If the producer keeps sending data while the consumer is running, the messages appear in the order they were sent: the consumer is already attached, so it reads each record as soon as the producer writes it.

In short: a single partition is ordered, multiple partitions are not.
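
If ordering matters across a whole stream of related messages, a common workaround is to give them the same key: records with the same key always land in the same partition, so their relative order is preserved. A minimal sketch using the console tools' parse.key/key.separator/print.key properties (the keys and values below are made-up examples):

# Producer: everything keyed "user1" goes to one partition and stays in order
bin/kafka-console-producer.sh --broker-list node01:9092,node02:9092,node03:9092 --topic test --property parse.key=true --property key.separator=:
>user1:event-1
>user1:event-2
# Consumer: print the keys to verify which records share a partition
bin/kafka-console-consumer.sh --bootstrap-server node01:9092,node02:9092,node03:9092 --topic test --from-beginning --property print.key=true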

6. Describe a topic

[root@node01 kafka]$ bin/kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --describe --topic test
Topic:test	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test	Partition: 0	Leader: 2	Replicas: 2,1,0	Isr: 2,1,0
	Topic: test	Partition: 1	Leader: 0	Replicas: 0,2,1	Isr: 0,2,1
	Topic: test	Partition: 2	Leader: 1	Replicas: 1,0,2	Isr: 1,0,2
	
# CDH version
$ kafka-topics --zookeeper cm1:2181,cm2:2181,cm3:2181 --describe --topic test

  • Partition: the partition number

  • Leader: the id of the broker that is the leader for this partition

  • Replicas: the brokers that hold the replicas (listed in preferred-replica order)

  • Isr: the in-sync replica set, i.e. the replicas that are fully caught up with the leader

7. Change the number of partitions

Note: the number of partitions can only be increased, never decreased, because the existing partitions already contain data.

[root@node01 kafka]$ bin/kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --alter --topic test --partitions 6
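
To confirm the change, re-run the describe command from section 6; the output should now report PartitionCount:6 (trying to lower the count with the same command is rejected instead):

# Verify the new partition count
bin/kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --describe --topic test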

8. Inspect consumer and broker information in ZooKeeper

# Start the ZooKeeper client:
./zkCli.sh
# View topic-related information:
[zk: localhost:2181(CONNECTED) 0] ls /brokers/topics
[test]
[zk: localhost:2181(CONNECTED) 1] ls /brokers/topics/test
[partitions]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/topics/test/partitions
[0]
[zk: localhost:2181(CONNECTED) 3] ls /brokers/topics/test/partitions/0
[state]
[zk: localhost:2181(CONNECTED) 4] ls /brokers/topics/test/partitions/0/state
[]
[zk: localhost:2181(CONNECTED) 6] get /brokers/topics/test/partitions/0/state
{"controller_epoch":1,"leader":1,"version":1,"leader_epoch":0,"isr":[1]}
cZxid = 0x4f00000113
ctime = Sun Jul 28 17:17:33 CST 2019
mZxid = 0x4f00000113
mtime = Sun Jul 28 17:17:33 CST 2019
pZxid = 0x4f00000113
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 72
numChildren = 0

# View consumer-related information:
[zk: localhost:2181(CONNECTED) 7] ls /consumers
[console-consumer-21115]
[zk: localhost:2181(CONNECTED) 8] ls /consumers/console-consumer-21115
[ids, owners, offsets]
[zk: localhost:2181(CONNECTED) 9] ls /consumers/console-consumer-21115/offsets
[test]
[zk: localhost:2181(CONNECTED) 10] ls /consumers/console-consumer-21115/offsets/test
[0]
[zk: localhost:2181(CONNECTED) 11] ls /consumers/console-consumer-21115/offsets/test/0
[]
[zk: localhost:2181(CONNECTED) 12] get /consumers/console-consumer-21115/offsets/test/0
2
cZxid = 0x4f0000011e
ctime = Sun Jul 28 17:18:23 CST 2019
mZxid = 0x4f0000011e
mtime = Sun Jul 28 17:18:23 CST 2019
pZxid = 0x4f0000011e
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 1
numChildren = 0
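
Broker registrations can be inspected in the same session: each live broker keeps an ephemeral znode under /brokers/ids. A short sketch (the prompt numbers and the broker id 0 are illustrative; pick an id from the ls output):

# View broker-related information:
[zk: localhost:2181(CONNECTED) 13] ls /brokers/ids
[zk: localhost:2181(CONNECTED) 14] get /brokers/ids/0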

9. How to completely delete a topic's data from Kafka

  1. Delete the topic on the Kafka cluster; the topic is only marked for deletion.

    A topic marked for deletion can still be read; its data is only removed automatically after about a week (the default retention period).

    To have the delete command remove the topic directly (supported since version 0.8), enable the option:

    set delete.topic.enable=true in config/server.properties

    [root@node01 ~]# kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --list
    test
    [root@node01 ~]# kafka-topics.sh --zookeeper node01:2181,node02:2181,node03:2181 --delete --topic test
    Topic test is marked for deletion.
    Note: This will have no impact if delete.topic.enable is not set to true.
    
  2. On every broker node, delete the actual on-disk data for this topic.

    [root@node01 ~]# cd /opt/module/kafka/logs
    [root@node01 logs]# ls
    cleaner-offset-checkpoint  log-start-offset-checkpoint  meta.properties  recovery-point-offset-checkpoint  test-0  test-1  test-2  replication-offset-checkpoint
    [root@node01 logs]# rm -rf ./test-0
    [root@node01 logs]# rm -rf ./test-1
    [root@node01 logs]# rm -rf ./test-2
    
    [root@node02 logs]# rm -rf ./test-0
    [root@node02 logs]# rm -rf ./test-1
    [root@node02 logs]# rm -rf ./test-2
    
    [root@node03 logs]# rm -rf ./test-0
    [root@node03 logs]# rm -rf ./test-1
    [root@node03 logs]# rm -rf ./test-2
    
  3. In the ZooKeeper client, delete the topic metadata

    [root@node01 logs]# zkCli.sh 
    [zk: localhost:2181(CONNECTED) 0] ls /brokers/topics
    [test]
    [zk: localhost:2181(CONNECTED) 1] rmr /brokers/topics/test
    [zk: localhost:2181(CONNECTED) 2] ls /brokers/topics     
    []
    
  4. Delete the topic's metadata under /config

    [zk: localhost:2181(CONNECTED) 5] ls /config/topics
    [test]
    [zk: localhost:2181(CONNECTED) 6] rmr /config/topics/test
    [zk: localhost:2181(CONNECTED) 7] ls /config/topics     
    []
    
  5. Delete the topic's deletion marker in ZooKeeper

    [zk: localhost:2181(CONNECTED) 9] rmr /admin/delete_topics/test
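
    To confirm the cleanup, re-list the three paths touched above (the prompt numbers are illustrative); the test topic should no longer appear in any of them:

    [zk: localhost:2181(CONNECTED) 10] ls /admin/delete_topics
    [zk: localhost:2181(CONNECTED) 11] ls /brokers/topics
    [zk: localhost:2181(CONNECTED) 12] ls /config/topics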
    
