Kafka (4): Kafka Cluster Deployment

1. Environment Preparation

Cluster plan

bigdata11   bigdata12   bigdata13
zk          zk          zk
kafka       kafka       kafka

1.1 Prepare the Virtual Machines

  1. Prepare three virtual machines.

  2. Configure the IP addresses.

  3. Configure the hostnames.

  4. Disable the firewall on all three hosts (for CentOS 7 and later, see the note after this list).

    [root@bigdata11 during]# chkconfig iptables off
    [root@bigdata12 during]# chkconfig iptables off
    [root@bigdata13 during]# chkconfig iptables off
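
Note: chkconfig applies to SysV-init systems such as CentOS 6, which these notes assume. On CentOS 7 and later, where firewalld runs under systemd, the equivalent would be:

    systemctl stop firewalld       # stop the firewall immediately
    systemctl disable firewalld    # keep it from starting at boot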
    

1.2 Install the JDK

1.3 Install Zookeeper
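
Zookeeper installation is assumed to be done already (see the earlier posts in this series). For reference, a minimal sketch of a three-node ensemble; the install path is an assumption:

    # /usr/local/zookeeper/conf/zoo.cfg (assumed path)
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/usr/local/zookeeper/data
    clientPort=2181
    server.1=bigdata11:2888:3888
    server.2=bigdata12:2888:3888
    server.3=bigdata13:2888:3888

    # On each node, write its id into the myid file (1 on bigdata11, and so on)
    echo 1 > /usr/local/zookeeper/data/myid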

2. Kafka Cluster Deployment

  1. Download the release, unpack it, and rename the directory.

    Download: http://kafka.apache.org/downloads.html

    # Unpack
    tar -zxvf kafka_2.11-0.11.0.2.tgz -C /usr/local/
    
    # Rename the unpacked directory
    cd /usr/local
    mv kafka_2.11-0.11.0.2/ kafka
    
  2. Create a logs folder under /usr/local/kafka.

    cd /usr/local/kafka
    mkdir logs
    
  3. Edit the configuration file.

    cd /usr/local/kafka/config
    vim server.properties
    
    # Globally unique broker id; must not repeat across the cluster
    broker.id=0
    
    # Whether topic deletion is allowed
    delete.topic.enable=true
    
    # Number of threads handling network requests
    num.network.threads=3
    
    # Number of threads handling disk I/O
    num.io.threads=8
    
    # Socket send buffer size
    socket.send.buffer.bytes=102400
    
    # Socket receive buffer size
    socket.receive.buffer.bytes=102400
    
    # Maximum size of a socket request
    socket.request.max.bytes=104857600
    
    # Directory for Kafka's data (the partition logs, not the process logs)
    log.dirs=/usr/local/kafka/logs
    
    # Default number of partitions per topic on this broker
    num.partitions=1
    
    # Threads per data directory used to recover and clean up data
    num.recovery.threads.per.data.dir=1
    
    # Maximum time a segment file is retained before deletion
    log.retention.hours=168
    
    # Zookeeper cluster connection string
    zookeeper.connect=bigdata11:2181,bigdata12:2181,bigdata13:2181
    
  4. Configure environment variables.

    vi /etc/profile
    
    # append:
    export KAFKA_HOME=/usr/local/kafka
    export PATH=$PATH:$KAFKA_HOME/bin
    
    source /etc/profile
    
  5. Distribute the profile file and the Kafka installation to the other two nodes, bigdata12 and bigdata13 (xsync is a custom sync script, not a standard tool; see the sketch after this list).

    # Distribute profile
    cd /etc/
    xsync profile
    
    # Distribute kafka
    cd /usr/local
    xsync kafka
    
  6. On bigdata12 and bigdata13, change broker.id in /usr/local/kafka/config/server.properties.

    On bigdata12: broker.id=1

    On bigdata13: broker.id=2

    Note: broker.id must not repeat.

  7. Start the cluster.

    Start Kafka on bigdata11, bigdata12, and bigdata13 in turn (the trailing & runs the broker in the background; a helper script that starts and stops all three brokers over SSH is sketched after this list).

    [root@bigdata11 kafka]$ bin/kafka-server-start.sh config/server.properties &
    [root@bigdata12 kafka]$ bin/kafka-server-start.sh config/server.properties &
    [root@bigdata13 kafka]$ bin/kafka-server-start.sh config/server.properties &
    
  8. Stop the cluster.

    [root@bigdata11 kafka]$ bin/kafka-server-stop.sh
    [root@bigdata12 kafka]$ bin/kafka-server-stop.sh
    [root@bigdata13 kafka]$ bin/kafka-server-stop.sh
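
The xsync command used in step 5 is a custom distribution script from this series. A minimal sketch, assuming rsync is installed on every node and passwordless SSH is configured from this node to the other two:

    #!/bin/bash
    # xsync: sync a file or directory to the same absolute path on the other nodes.
    if [ $# -lt 1 ]; then
      echo "usage: xsync <path>"
      exit 1
    fi
    dir=$(cd -P "$(dirname "$1")" && pwd)   # absolute parent directory
    name=$(basename "$1")
    for host in bigdata12 bigdata13; do
      echo "----- syncing $name to $host -----"
      rsync -av "$dir/$name" "$host:$dir/"
    done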
    
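A helper for steps 7 and 8 that starts or stops all three brokers in one command. This is a sketch, not part of the original setup: the script name is hypothetical, and it assumes passwordless SSH plus the same install path on every host.

    #!/bin/bash
    # kafka-all.sh (hypothetical name): start/stop the broker on every node.
    case "$1" in
    start)
      for host in bigdata11 bigdata12 bigdata13; do
        echo "----- starting kafka on $host -----"
        # -daemon detaches the broker, so no trailing & is needed;
        # source /etc/profile first because non-interactive SSH skips it.
        ssh "$host" "source /etc/profile; /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties"
      done
      ;;
    stop)
      for host in bigdata11 bigdata12 bigdata13; do
        echo "----- stopping kafka on $host -----"
        ssh "$host" "source /etc/profile; /usr/local/kafka/bin/kafka-server-stop.sh"
      done
      ;;
    *)
      echo "usage: $0 {start|stop}"
      ;;
    esac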

3. Command-Line Operations

# List all topics
./bin/kafka-topics.sh --zookeeper node3:2181 --list

# Describe a topic
./bin/kafka-topics.sh --zookeeper node3:2181 --describe --topic topic0908 

# Create a topic
./bin/kafka-topics.sh --zookeeper node3:2181 --create --replication-factor 3 --partitions 1 --topic topic0908

# Delete a topic. Requires delete.topic.enable=true in server.properties;
# otherwise the topic is only marked for deletion (unless the brokers are restarted).
./bin/kafka-topics.sh --zookeeper node3:2181 --delete --topic second

# Consume messages
./bin/kafka-console-consumer.sh --bootstrap-server node3:9092 --from-beginning --topic topic0908

# Produce messages
./bin/kafka-console-producer.sh --broker-list node3:9092 --topic topic0908

# Consuming as a consumer group:

# First set the group id in the consumer.properties config file
group.id=during
 
# Producer:
./bin/kafka-console-producer.sh --broker-list node3:9092 --topic topic0908
 
# Consumer group:
./bin/kafka-console-consumer.sh --bootstrap-server node3:9092 --from-beginning --topic topic0908 --consumer.config config/consumer.properties
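
Beyond the commands above (a small addition to the original notes), this Kafka version also ships kafka-consumer-groups.sh, which is handy for checking what a group such as group.id=during has consumed:

# List all consumer groups known to the brokers
./bin/kafka-consumer-groups.sh --bootstrap-server node3:9092 --list

# Show per-partition offsets and lag for one group
./bin/kafka-consumer-groups.sh --bootstrap-server node3:9092 --describe --group during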

4. Configuration

Only the commonly used parameters are covered here, so please bear with me Ծ‸Ծ

4.1 Broker Configuration

Format: property (default): description

broker.id (no default): Required. The unique identifier of the broker.

log.dirs (/tmp/kafka-logs): Directory where Kafka data is stored. Multiple comma-separated directories may be listed; a newly created partition is placed in the directory currently holding the fewest partitions.

port (9092): Port on which the broker accepts client connections.

zookeeper.connect (null): Zookeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; listing them all improves reliability. A chroot path may be appended to keep this cluster's data separate from other applications, e.g. hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Note that consumers must use the same value.

message.max.bytes (1000000): Largest message the server will accept. Keep it consistent with the consumer's maximum.message.size, otherwise producers may write messages too large for consumers to fetch.

num.io.threads (8): Number of I/O threads serving read and write requests; should be at least the number of disks on the server.

queued.max.requests (500): Size of the queue feeding the I/O threads; when actual requests exceed it, the network threads stop accepting new requests.

socket.send.buffer.bytes (100 * 1024): The SO_SNDBUFF buffer the server prefers for socket connections.

socket.receive.buffer.bytes (100 * 1024): The SO_RCVBUFF buffer the server prefers for socket connections.

socket.request.max.bytes (100 * 1024 * 1024): Largest request the server allows; guards against out-of-memory, so it should be smaller than the Java heap size.

num.partitions (1): Partition count used when a topic is created without an explicit partition count; raising it to 5 is suggested.

log.segment.bytes (1024 * 1024 * 1024): Segment file size; a new segment is rolled once this is exceeded. May be overridden per topic.

log.roll.{ms,hours} (24 * 7 hours): Time after which a new segment file is rolled. May be overridden per topic.

log.retention.{ms,minutes,hours} (7 days): Retention period of Kafka segment logs; logs older than this are deleted. May be overridden per topic. With large data volumes, lowering it is suggested.

log.retention.bytes (-1): Maximum size per partition; data beyond it is deleted. Note this bounds each partition, not the topic. May be overridden at the log (topic) level.

log.retention.check.interval.ms (5 minutes): How often the deletion policy is checked.

auto.create.topics.enable (true): Automatic topic creation. Setting it to false is suggested, so topic management stays strict and producers cannot write to a mistyped topic.

default.replication.factor (1): Default replica count; raising it to 2 is suggested.

replica.lag.time.max.ms (10000): If the leader receives no fetch request from a follower within this window, it removes the follower from the ISR (in-sync replicas).

replica.lag.max.messages (4000): If a replica falls this many messages behind the leader, the leader removes it from the ISR.

replica.socket.timeout.ms (30 * 1000): Timeout for requests replicas send to the leader.

replica.socket.receive.buffer.bytes (64 * 1024): The socket receive buffer for network requests to the leader for replicating data.

replica.fetch.max.bytes (1024 * 1024): The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.

replica.fetch.wait.max.ms (500): The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas.

num.replica.fetchers (1): Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker.

fetch.purgatory.purge.interval.requests (1000): The purge interval (in number of requests) of the fetch request purgatory.

zookeeper.session.timeout.ms (6000): ZooKeeper session timeout. If the server sends no heartbeat within this period, ZooKeeper considers the node dead. Too low, and nodes are marked dead too easily; too high, and real failures are detected too late.

zookeeper.connection.timeout.ms (6000): Timeout for clients connecting to ZooKeeper.

zookeeper.sync.time.ms (2000): How far a ZooKeeper follower may lag behind the ZooKeeper leader.

controlled.shutdown.enable (true): Controlled broker shutdown. If enabled, the broker moves all leaders it hosts to other brokers before shutting itself down. Keeping it enabled is suggested, as it improves cluster stability.

auto.leader.rebalance.enable (true): If enabled, the controller automatically tries to balance partition leadership among the brokers by periodically returning leadership to the "preferred" replica of each partition when it is available.

leader.imbalance.per.broker.percentage (10): The percentage of leader imbalance allowed per broker. The controller rebalances leadership if this ratio rises above the configured value for a broker.

leader.imbalance.check.interval.seconds (300): How often to check for leader imbalance.

offset.metadata.max.bytes (4096): The maximum amount of metadata clients may save with their offsets.

connections.max.idle.ms (600000): Idle connection timeout: the server socket processor threads close connections idle longer than this.

num.recovery.threads.per.data.dir (1): The number of threads per data directory used for log recovery at startup and flushing at shutdown.

unclean.leader.election.enable (true): Whether replicas outside the ISR may be elected leader as a last resort, even though doing so may result in data loss.

delete.topic.enable (false): Enables topic deletion; setting it to true is suggested.

offsets.topic.num.partitions (50): Number of partitions for the offset commit topic. Changing this after deployment is currently unsupported, so a higher setting (e.g. 100-200) is recommended for production.

offsets.topic.retention.minutes (1440): Offsets older than this are marked for deletion. The actual purge occurs when the log cleaner compacts the offsets topic.

offsets.retention.check.interval.ms (600000): How often the offset manager checks for stale offsets.

offsets.topic.replication.factor (3): Replication factor of the offset commit topic. A higher setting (e.g. three or four) is recommended for availability. If the topic is created while fewer brokers than the replication factor are up, it is created with fewer replicas.

offsets.topic.segment.bytes (104857600): Segment size for the offsets topic. Since it is a compacted topic, this should be kept relatively low to allow faster log compaction and loading.

offsets.load.buffer.size (5242880): Batch size (in bytes) used when reading offsets segments into the offset manager's cache. An offset load occurs when a broker becomes the offset manager for a set of consumer groups, i.e. when it becomes leader for an offsets topic partition.

offsets.commit.required.acks (-1): Acknowledgements required before an offset commit is accepted, similar to the producer's acknowledgement setting. In general, the default should not be overridden.

offsets.commit.timeout.ms (5000): The offset commit is delayed until this timeout or until the required number of replicas have received it, similar to the producer request timeout.
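
Pulling the recommendations scattered through the table above into one place, a sketch of server.properties overrides for a small three-broker cluster like this one (everything else stays at its default; the values are the table's suggestions, not mandates):

# Suggested overrides from the table above
delete.topic.enable=true            # allow real topic deletion
auto.create.topics.enable=false     # block accidental topics from typos
default.replication.factor=2        # tolerate one broker failure
num.partitions=5                    # default partition count for new topics
controlled.shutdown.enable=true     # migrate leaders before shutting down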

4.2 Producer Configuration

Format: property (default): description

metadata.broker.list (no default): Brokers the producer queries at startup; may be a subset of the cluster. Used only to fetch topic metadata, from which the producer picks suitable brokers and opens socket connections to them. Format: host1:port1,host2:port2.

request.required.acks (0): See section 3.2.

request.timeout.ms (10000): How long the broker waits for an ack; past this, an error is returned to the client.

producer.type (sync): Sync or async mode; async means asynchronous, sync means synchronous. Async lets the producer push data in batches, which greatly improves broker throughput; async is recommended.

serializer.class (kafka.serializer.DefaultEncoder): Serializer class; by default messages are serialized to byte[].

key.serializer.class (no default): Serializer class for keys; defaults to the same as serializer.class.

partitioner.class (kafka.producer.DefaultPartitioner): Partitioner class; the default hashes the key.

compression.codec (none): Compression format for producer messages; one of "none", "gzip" and "snappy".

compressed.topics (null): Topics to compress. If a codec is chosen above, compression applies only to the topics named here; if empty, it applies to all topics.

message.send.max.retries (3): Number of retries when a send fails. Network problems may cause endless retrying.

retry.backoff.ms (100): Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies how long the producer waits before refreshing the metadata.

topic.metadata.refresh.interval.ms (600 * 1000): The producer normally refreshes topic metadata from brokers on failure (partition missing, leader not available, ...). It also polls regularly (default: every 10 min, i.e. 600000 ms). A negative value refreshes only on failure; zero refreshes after every message sent (not recommended). Important: the refresh happens only after a message is sent, so a producer that never sends never refreshes its metadata.

queue.buffering.max.ms (5000): In async mode, how long the producer buffers messages. Set to 1000, for example, it accumulates one second of data per send, which greatly increases broker throughput at the cost of timeliness.

queue.buffering.max.messages (10000): In async mode, the maximum number of messages buffered in the producer queue; beyond it, the producer blocks or drops messages.

queue.enqueue.timeout.ms (-1): How long the producer blocks when the limit above is reached. 0 means never block: when the buffer is full, messages are dropped immediately. -1 means block indefinitely and never drop a message.

batch.num.messages (200): In async mode, the number of messages buffered per batch; the producer sends only once this count is reached.

send.buffer.bytes (100 * 1024): Socket write buffer size.

client.id (""): A user-specified string sent in each request to help trace calls; it should logically identify the application making the request.
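
As a worked example of the async settings above, a sketch of a producer.properties for the old Scala producer this table describes (host list and values are illustrative, not prescriptive):

# Async producer sketch (old Scala producer)
metadata.broker.list=bigdata11:9092,bigdata12:9092,bigdata13:9092
producer.type=async                 # batch sends for throughput
compression.codec=snappy            # compress message batches
batch.num.messages=200              # messages per batch
queue.buffering.max.ms=5000         # buffer up to 5 s before sending
queue.enqueue.timeout.ms=-1         # block (never drop) when the buffer is full
request.required.acks=1             # wait for the leader's acknowledgement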

4.3 Consumer Configuration

Format: property (default): description

group.id (no default): The consumer's group id; consumers with the same group.id belong to the same group.

zookeeper.connect (no default): The consumer's ZooKeeper connection string; must match the broker's setting.

consumer.id (null): Generated automatically if not set.

socket.timeout.ms (30 * 1000): Socket timeout for network requests. The effective timeout is max.fetch.wait + socket.timeout.ms.

socket.receive.buffer.bytes (64 * 1024): The socket receive buffer for network requests.

fetch.message.max.bytes (1024 * 1024): Largest message allowed when fetching a topic-partition. The consumer buffers this much per partition in memory, so this parameter bounds the consumer's memory use. It should be at least the server's maximum message size, so producers never send messages larger than the consumer can accept.

num.consumer.fetchers (1): The number of fetcher threads used to fetch data.

auto.commit.enable (true): If true, the consumer periodically saves its current offset to ZooKeeper; after a crash and restart, consumption resumes from the saved offset.

auto.commit.interval.ms (60 * 1000): How often the consumer commits its offset to ZooKeeper.

queued.max.message.chunks (2): Number of message chunks buffered for consumption; each chunk can hold up to fetch.message.max.bytes of data.

fetch.min.bytes (1): The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request waits for that much data to accumulate before answering.

fetch.wait.max.ms (100): The maximum amount of time the server blocks before answering a fetch request when there isn't sufficient data to satisfy fetch.min.bytes.

rebalance.backoff.ms (2000): Backoff time between retries during rebalance.

refresh.leader.backoff.ms (200): Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.

auto.offset.reset (largest): What to do when there is no initial offset in ZooKeeper or an offset is out of range. smallest: automatically reset to the smallest offset; largest: automatically reset to the largest offset; anything else: throw an exception to the consumer.

consumer.timeout.ms (-1): If no message is consumed within this time, the consumer throws an exception.

exclude.internal.topics (true): Whether messages from internal topics (such as offsets) should be exposed to the consumer.

zookeeper.session.timeout.ms (6000): ZooKeeper session timeout. A consumer that fails to heartbeat to ZooKeeper within this period is considered dead and a rebalance occurs.

zookeeper.connection.timeout.ms (6000): The maximum time the client waits while establishing a connection to ZooKeeper.

zookeeper.sync.time.ms (2000): How far a ZooKeeper follower can be behind a ZooKeeper leader.
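
And a matching sketch for the old ZooKeeper-based consumer this table describes. Note that when the console consumer is started with --bootstrap-server as in section 3, the new consumer reads group.id from this file but ignores zookeeper.connect, and spells the reset values earliest/latest instead:

# High-level consumer sketch (old ZooKeeper-based consumer)
zookeeper.connect=bigdata11:2181,bigdata12:2181,bigdata13:2181
group.id=during                     # consumers sharing this id form one group
auto.offset.reset=smallest          # start from the earliest offset if none is stored
auto.commit.enable=true             # periodically commit offsets to ZooKeeper
auto.commit.interval.ms=60000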
