[Kafka] Command-Line Operations

In the previous article, [Kafka] Installation and Deployment, we set up a three-node Kafka cluster.
This article covers how to operate that cluster from the command line.

Topic Creation

Create a topic named test01:

bin/kafka-topics.sh \
--create \
--zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
--replication-factor 2 \
--partitions 3 \
--topic test01

--zookeeper: the ZooKeeper connection addresses
--replication-factor num: set the number of replicas per partition to num
--partitions num: set the number of partitions for the topic
--topic name: set the topic name
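As a sketch (not from the original article), the create command above can be assembled programmatically. The helper below is hypothetical; the broker-count check mirrors Kafka's rule that the replication factor cannot exceed the number of available brokers.

```python
def build_create_command(zookeeper, topic, partitions, replication_factor,
                         num_brokers):
    # Kafka rejects a replication factor larger than the broker count,
    # so fail early here instead of at the broker.
    if replication_factor > num_brokers:
        raise ValueError("replication factor cannot exceed broker count")
    return [
        "bin/kafka-topics.sh", "--create",
        "--zookeeper", ",".join(zookeeper),
        "--replication-factor", str(replication_factor),
        "--partitions", str(partitions),
        "--topic", topic,
    ]

cmd = build_create_command(
    ["192.168.133.13:2181", "192.168.133.14:2181", "192.168.133.15:2181"],
    topic="test01", partitions=3, replication_factor=2, num_brokers=3)
print(" ".join(cmd))
```

With replication_factor=2 and three brokers this reproduces the command shown above; asking for a factor of 4 would raise instead of failing remotely.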

Topic Query

List all topics:

[app@test13 kafka]$ bin/kafka-topics.sh --list --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181
test
test01
test02

This command lists every topic registered in ZooKeeper.

View details of a single topic

bin/kafka-topics.sh \
--describe \
--zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
--topic test01

Detailed information for topic test01:

[app@test13 kafka]$ bin/kafka-topics.sh \
> --describe \
> --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
> --topic test01

Topic:test01	PartitionCount:3	ReplicationFactor:2	Configs:
	Topic: test01	Partition: 0	Leader: 1	Replicas: 1,2	Isr: 1,2
	Topic: test01	Partition: 1	Leader: 0	Replicas: 2,0	Isr: 0,2
	Topic: test01	Partition: 2	Leader: 0	Replicas: 0,1	Isr: 0,1

A quick explanation of the output: Topic is the topic name, PartitionCount is the number of partitions (which determines how many brokers the data can be spread across), and ReplicationFactor is the number of replicas.
The per-partition lines show, for each partition, which broker is its leader, which brokers hold its replicas, and the ISR (in-sync replicas): every node in that set is alive and caught up with the leader.
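The per-partition lines above are tab-separated `Key: value` pairs, so they are straightforward to parse. A minimal sketch (hypothetical helper, assuming exactly the format shown above):

```python
def parse_partition_line(line):
    # Each field looks like "Leader: 1" or "Replicas: 1,2", separated by tabs.
    fields = {}
    for part in line.strip().split("\t"):
        key, _, value = part.partition(": ")
        fields[key] = value
    return {
        "partition": int(fields["Partition"]),
        "leader": int(fields["Leader"]),
        "replicas": [int(b) for b in fields["Replicas"].split(",")],
        "isr": [int(b) for b in fields["Isr"].split(",")],
    }

info = parse_partition_line(
    "Topic: test01\tPartition: 0\tLeader: 1\tReplicas: 1,2\tIsr: 1,2")
print(info)
```

For the first line of the describe output this yields leader broker 1 with replicas on brokers 1 and 2, both in sync.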

Topic Modification

The number of partitions can be changed by specifying --partitions.
First, look at the topic's current state:

[app@test13 kafka]$ bin/kafka-topics.sh \
> --describe \
> --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
> --topic test
Topic:test	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test	Partition: 1	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
	Topic: test	Partition: 2	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1

Now try to change the partition count to 2:

[app@test13 kafka]$ bin/kafka-topics.sh \
>     --alter  \
>     --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
>     --topic test --partitions 2
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Error while executing topic command : The number of partitions for a topic can only be increased. Topic test currently has 3 partitions, 2 would not be an increase.
[2019-11-12 16:11:32,686] ERROR org.apache.kafka.common.errors.InvalidPartitionsException: The number of partitions for a topic can only be increased. Topic test currently has 3 partitions, 2 would not be an increase.
 (kafka.admin.TopicCommand$)

As the output shows, the command failed: the partition count can only be increased, never decreased. Increasing partitions is typically done after adding brokers to the cluster, to spread the topic across them.
Let's increase the partition count to 4 instead:

[app@test13 kafka]$ bin/kafka-topics.sh     --alter      --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181     --topic test --partitions 4
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!

Check the details again:

[app@test13 kafka]$ bin/kafka-topics.sh --describe --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 --topic test
Topic:test	PartitionCount:4	ReplicationFactor:3	Configs:
	Topic: test	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test	Partition: 1	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
	Topic: test	Partition: 2	Leader: 2	Replicas: 2,0,1	Isr: 2,0,1
	Topic: test	Partition: 3	Leader: 0	Replicas: 0,2,1	Isr: 0,2,1
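The WARNING in the output matters because Kafka's default partitioner routes a keyed message by hashing the key modulo the partition count. The sketch below substitutes CRC-32 for Kafka's murmur2 hash, so the exact mapping differs, but the effect is the same: growing the partition count can move a key to a different partition, breaking per-key ordering.

```python
import zlib

def pick_partition(key, num_partitions):
    # CRC-32 stands in for Kafka's murmur2 hash; both are deterministic.
    return zlib.crc32(key.encode()) % num_partitions

key = "a"
before = pick_partition(key, 3)  # partition while the topic had 3 partitions
after = pick_partition(key, 4)   # partition after growing to 4
print(before, after)  # → 0 3: this key's new messages land on a different partition
```

Old messages for the key stay on the old partition while new ones go to the new one, so consumers can no longer rely on seeing that key's messages in order.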

Topic Deletion

The cluster currently contains two topics, test01 and test02:

[app@test13 kafka]$ bin/kafka-topics.sh --list --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181
test01
test02

Now delete the topic named test01:

[app@test13 kafka]$ bin/kafka-topics.sh  --delete --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 --topic test01
Topic test01 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[app@test13 kafka]$ bin/kafka-topics.sh --list --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181
test02
[app@test13 kafka]$ 

Deletion only marks the topic for removal at first; Kafka then schedules the actual deletion internally. As the note in the output says, this has no effect unless delete.topic.enable is set to true.

Producer

Usage of kafka-console-producer.sh (from --help):

This tool helps to read data from standard input and publish it to Kafka.
Option                                   Description                            
------                                   -----------                            
--batch-size <Integer: size>             Number of messages to send in a single 
                                           batch if they are not being sent     
                                           synchronously. (default: 200)        
--broker-list <String: broker-list>      REQUIRED: The broker list string in    
                                           the form HOST1:PORT1,HOST2:PORT2.    
--compression-codec [String:             The compression codec: either 'none',  
  compression-codec]                       'gzip', 'snappy', 'lz4', or 'zstd'.  
                                           If specified without value, then it  
                                           defaults to 'gzip'                   
--help                                   Print usage information.               
--line-reader <String: reader_class>     The class name of the class to use for 
                                           reading lines from standard in. By   
                                           default each line is read as a       
                                           separate message. (default: kafka.   
                                           tools.                               
                                           ConsoleProducer$LineMessageReader)   
--max-block-ms <Long: max block on       The max time that the producer will    
  send>                                    block for during a send request      
                                           (default: 60000)                     
--max-memory-bytes <Long: total memory   The total memory used by the producer  
  in bytes>                                to buffer records waiting to be sent 
                                           to the server. (default: 33554432)   
--max-partition-memory-bytes <Long:      The buffer size allocated for a        
  memory in bytes per partition>           partition. When records are received 
                                           which are smaller than this size the 
                                           producer will attempt to             
                                           optimistically group them together   
                                           until this size is reached.          
                                           (default: 16384)                     
--message-send-max-retries <Integer>     Brokers can fail receiving the message 
                                           for multiple reasons, and being      
                                           unavailable transiently is just one  
                                           of them. This property specifies the 
                                           number of retires before the         
                                           producer give up and drop this       
                                           message. (default: 3)                
--metadata-expiry-ms <Long: metadata     The period of time in milliseconds     
  expiration interval>                     after which we force a refresh of    
                                           metadata even if we haven't seen any 
                                           leadership changes. (default: 300000)
--producer-property <String:             A mechanism to pass user-defined       
  producer_prop>                           properties in the form key=value to  
                                           the producer.                        
--producer.config <String: config file>  Producer config properties file. Note  
                                           that [producer-property] takes       
                                           precedence over this config.         
--property <String: prop>                A mechanism to pass user-defined       
                                           properties in the form key=value to  
                                           the message reader. This allows      
                                           custom configuration for a user-     
                                           defined message reader.              
--request-required-acks <String:         The required acks of the producer      
  request required acks>                   requests (default: 1)                
--request-timeout-ms <Integer: request   The ack timeout of the producer        
  timeout ms>                              requests. Value must be non-negative 
                                           and non-zero (default: 1500)         
--retry-backoff-ms <Integer>             Before each retry, the producer        
                                           refreshes the metadata of relevant   
                                           topics. Since leader election takes  
                                           a bit of time, this property         
                                           specifies the amount of time that    
                                           the producer waits before refreshing 
                                           the metadata. (default: 100)         
--socket-buffer-size <Integer: size>     The size of the tcp RECV size.         
                                           (default: 102400)                    
--sync                                   If set message send requests to the    
                                           brokers are synchronously, one at a  
                                           time as they arrive.                 
--timeout <Integer: timeout_ms>          If set and the producer is running in  
                                           asynchronous mode, this gives the    
                                           maximum amount of time a message     
                                           will queue awaiting sufficient batch 
                                           size. The value is given in ms.      
                                           (default: 1000)                      
--topic <String: topic>                  REQUIRED: The topic id to produce      
                                           messages to.                         
--version                                Display Kafka version.  

First create a topic named test01 (as in the creation step above), then start a console producer against it:

bin/kafka-console-producer.sh --broker-list 192.168.133.15:9092 --topic test01
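Per the help text, the console producer's default LineMessageReader turns every line of standard input into one message. A rough simulation with a stubbed-out send function (the helper names are hypothetical, not part of Kafka):

```python
import io

def produce_lines(stream, send):
    # Mimics the default LineMessageReader: every input line becomes
    # one message, with the trailing newline stripped.
    count = 0
    for line in stream:
        send(line.rstrip("\n"))
        count += 1
    return count

sent = []
n = produce_lines(io.StringIO("hello\nworld\n"), sent.append)
print(n, sent)  # → 2 ['hello', 'world']
```

Against a real cluster, `send` would be a call into the Kafka producer instead of a list append.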

Consumer

Usage of kafka-console-consumer.sh (from --help):

[app@node14 kafka]$ bin/kafka-console-consumer.sh --help
This tool helps to read data from Kafka topics and outputs it to standard output.
Option                                   Description                            
------                                   -----------                            
--bootstrap-server <String: server to    REQUIRED: The server(s) to connect to. 
  connect to>                                                                   
--consumer-property <String:             A mechanism to pass user-defined       
  consumer_prop>                           properties in the form key=value to  
                                           the consumer.                        
--consumer.config <String: config file>  Consumer config properties file. Note  
                                           that [consumer-property] takes       
                                           precedence over this config.         
--enable-systest-events                  Log lifecycle events of the consumer   
                                           in addition to logging consumed      
                                           messages. (This is specific for      
                                           system tests.)                       
--formatter <String: class>              The name of a class to use for         
                                           formatting kafka messages for        
                                           display. (default: kafka.tools.      
                                           DefaultMessageFormatter)             
--from-beginning                         If the consumer does not already have  
                                           an established offset to consume     
                                           from, start with the earliest        
                                           message present in the log rather    
                                           than the latest message.             
--group <String: consumer group id>      The consumer group id of the consumer. 
--help                                   Print usage information.               
--isolation-level <String>               Set to read_committed in order to      
                                           filter out transactional messages    
                                           which are not committed. Set to      
                                           read_uncommittedto read all          
                                           messages. (default: read_uncommitted)
--key-deserializer <String:                                                     
  deserializer for key>                                                         
--max-messages <Integer: num_messages>   The maximum number of messages to      
                                           consume before exiting. If not set,  
                                           consumption is continual.            
--offset <String: consume offset>        The offset id to consume from (a non-  
                                           negative number), or 'earliest'      
                                           which means from beginning, or       
                                           'latest' which means from end        
                                           (default: latest)                    
--partition <Integer: partition>         The partition to consume from.         
                                           Consumption starts from the end of   
                                           the partition unless '--offset' is   
                                           specified.                           
--property <String: prop>                The properties to initialize the       
                                           message formatter. Default           
                                           properties include:                  
                                         	print.timestamp=true|false            
                                         	print.key=true|false                  
                                         	print.value=true|false                
                                         	key.separator=<key.separator>         
                                         	line.separator=<line.separator>       
                                         	key.deserializer=<key.deserializer>   
                                         	value.deserializer=<value.            
                                           deserializer>                        
                                         Users can also pass in customized      
                                           properties for their formatter; more 
                                           specifically, users can pass in      
                                           properties keyed with 'key.          
                                           deserializer.' and 'value.           
                                           deserializer.' prefixes to configure 
                                           their deserializers.                 
--skip-message-on-error                  If there is an error when processing a 
                                           message, skip it instead of halt.    
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is    
                                           available for consumption for the    
                                           specified interval.                  
--topic <String: topic>                  The topic id to consume on.            
--value-deserializer <String:                                                   
  deserializer for values>                                                      
--version                                Display Kafka version.                 
--whitelist <String: whitelist>          Regular expression specifying          
                                           whitelist of topics to include for   
                                           consumption.                         
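A sketch of how the starting position follows from the flags described above (--from-beginning, --offset; default latest). The helper name is made up for illustration:

```python
def resolve_start_offset(offset=None, from_beginning=False):
    # Mirrors the help text: default is 'latest'; --from-beginning means
    # 'earliest'; --offset may be a non-negative number, 'earliest', or 'latest'.
    if from_beginning:
        return "earliest"
    if offset is None:
        return "latest"
    if offset in ("earliest", "latest"):
        return offset
    n = int(offset)
    if n < 0:
        raise ValueError("offset must be non-negative")
    return n

print(resolve_start_offset(from_beginning=True))  # → earliest
```

Note that in the real tool a numeric --offset also requires --partition, since an absolute offset is only meaningful within one partition.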

Now start two consumers (on 192.168.133.13 and 192.168.133.14) in the same consumer group:

[app@node14 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.133.13:9092 --group hello  --topic test01

1. Send messages

(screenshot: producer sending messages to test01)

2. Receive messages

Messages received on 192.168.133.13:

(screenshot: consumer output on 192.168.133.13)

Messages received on 192.168.133.14:

(screenshot: consumer output on 192.168.133.14)
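Because both consumers are in group hello, the topic's partitions are divided between them, which is why each machine sees only part of the messages. A rough sketch of the range assignment strategy (consumer names are hypothetical):

```python
def range_assign(partitions, consumers):
    # Sketch of Kafka's range assignor: partitions are split into
    # contiguous chunks; when the count does not divide evenly, the
    # consumers earlier in sorted order each get one extra partition.
    consumers = sorted(consumers)
    per, extra = divmod(len(partitions), len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        size = per + (1 if i < extra else 0)
        assignment[consumer] = partitions[start:start + size]
        start += size
    return assignment

a = range_assign([0, 1, 2], ["consumer-13", "consumer-14"])
print(a)  # → {'consumer-13': [0, 1], 'consumer-14': [2]}
```

With test01's three partitions and two group members, one consumer handles two partitions and the other one, so the produced messages are split between the two machines.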
