Kafka Production Environment Deployment Guide

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"1 Kafka 基本概念和系統架構","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在 Kafka 集羣中存在以下幾種節點角色:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Producer","attrs":{}},{"type":"text","text":":生產者,生產消息並推送到 Kafka 集羣中。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Consumer","attrs":{}},{"type":"text","text":":消費者,從 Kafka 集羣中拉取並消費消息。可以將一個和多個 Consumer 指定爲一個 Consumer Group(消費者組),一個消費者組在邏輯上是一個訂閱者,不同消費者組之間可以消費相同的數據,消費者組之間互不干擾。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Broker","attrs":{}},{"type":"text","text":":一臺 Kafka 服務器就是一個 Broker,一個 Kafka 集羣由多個 Broker 組成。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Controller","attrs":{}},{"type":"text","text":":Kafka 集羣中的其中一臺 Broker,負責集羣中的成員管理和 Topic 管理。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Zookeeper","attrs":{}},{"type":"text","text":":Kafka 集羣通過外部的 Zookeeper 來協調管理節點角色,存儲集羣的元數據信息。不過在 Kafka 2.8 版本開始可以不用 Zookeeper 作爲依賴組件了,官網把這種模式稱爲 KRaft 模式,Kafka 使用的內置共識機制進行集羣選舉並且將元數據信息保存在 Kafka 集羣中。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/28/28b9bda82b84c1d5a3d88217b48ddf71.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在 Kafka 中,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"副本(Replica)","attrs":{}},{"type":"text","text":" 分成兩類:領導者副本(Leader Replica)和追隨者副本(Follower Replica)。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Leader Replica","attrs":{}},{"type":"text","text":":每個分區在創建時都要選舉一個副本,稱爲 Leader 副本,其餘的副本稱爲 Follower 
副本。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Follower Replica","attrs":{}},{"type":"text","text":":從 Leader 副本中實時同步數據,當 Leader 副本發生故障時,某個 Follower 副本會提升爲 Leader。在 Kafka 中,Follower 副本是不對外提供服務的。也就是說,只有 Leader 副本纔可以響應消費者和生產者的讀寫請求。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/5f/5f8793802110e17499583629e404c637.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Record","attrs":{}},{"type":"text","text":":Kafka 是消息引擎,這裏的消息就是指 Kafka 處理的主要對象。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Topic","attrs":{}},{"type":"text","text":":主題是承載消息的邏輯容器,在實際使用中多用來區分具體的業務。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Consumer Offset","attrs":{}},{"type":"text","text":":消費者位移,表示消費者的消費進度,每個消費者都有自己的消費者位移。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Rebalance","attrs":{}},{"type":"text","text":":重平衡,消費者組內某個消費者實例掛掉後,其他消費者實例自動重新分配訂閱主題分區的過程。Rebalance 是 Kafka 消費者端實現高可用的重要手段。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/3d/3d3b514f54e0ba82fd3b032014809291.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"2 集羣容量預估","attrs":{}}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"假設 Kafka 集羣每日需要承載 10 億條數據,每條數據的大小大概是 10 KB,一天的數據總量約等於 10 TB。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲了數據的可靠性保證,我們設置 3 副本,每天的數據量爲 10 * 3 = 30 TB。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kafka 支持數據的壓縮,假設壓縮比是 0.75,那麼我們每天實際需要的存儲空間是 30 * 0.75 = 22.5 
TB。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通常情況下我們會在 Kafka 中保留 7 天的數據,方便在出現問題時回溯重新消費,那麼保存 7 天數據需要的存儲空間是 22.5 * 7 = 157.5 TB。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一般情況下 Kafka 集羣除了消息數據還有其他類型的數據,比如索引數據等,故我們再爲這些數據預留出 10% 的磁盤空間,因此最終需要的存儲空間爲 157.5 / 0.9 = 175 TB。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根據二八法則估計,10 億條數據中的 80%(8億)會在一天中的 20%(4.8小時) 的時間中湧入。也就是說一天中的高峯時期 Kafka 集羣需要扛住每秒 (10^9 * 0.8) / (24 * 0.2 * 60 * 60) = 4.6 萬次的併發。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"單臺物理機可以扛住 4 ~ 5 萬的併發,通常建議高峯時期的 QPS 控制在集羣能夠承載的 QPS 的 30% 左右,加上基於高可用的考量,這裏選擇使用 3 臺 Kafka 物理機搭建集羣。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3 臺物理機,總共 175 TB 數據,平均每臺機器 175 / 3 = 58 TB 數據,每臺物理機使用 15 塊 4 TB 的硬盤。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"總之在規劃磁盤容量時你需要考慮下面這幾個元素:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"新增消息數。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"消息留存時間。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"平均消息大小。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"副本數。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"是否啓用壓縮。","attrs":{}}]}]}],"attrs":{}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"3 資源評估","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3.1 硬盤","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"SSD 固態硬盤比機械硬盤快主要體現在隨機讀寫方面,比如 MySQL 中經常需要對硬盤進行隨機讀寫,就要用到 SSD 固態硬盤。而 Kafka 在寫磁盤的時候是 append-only 順序追加寫入的,而機械硬盤順序讀寫的性能和內存是差不多的,所以對於 Kafka 集羣來說使用機械硬盤就可以了。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"關於磁盤選擇另一個經常討論的話題就是到底是否應該使用磁盤陣列(RAID)。使用 RAID 
Another question that often comes up when choosing disks is whether to use a disk array (RAID). The two main advantages of RAID are:

- It provides redundant storage.
- It provides load balancing across disks.

For Kafka, however, redundancy is already provided by its own replication mechanism, and through partitions Kafka also balances load in software. A RAID setup is therefore not required; storage made up of plain individual disks is enough.

### 3.2 Memory

Kafka's own JVM does not need a large heap, because Kafka is deliberately designed not to hold message data as JVM objects, which avoids the problems caused by frequent Full GCs. A 6 GB heap is a widely accepted, reasonable value. Kafka relies heavily on the OS page cache to speed up reads and writes, and all remaining system memory effectively serves as page cache. Servers with at least 64 GB of RAM are recommended; 128 GB is even better, since more data can then be kept in the page cache.

### 3.3 CPU

Kafka is usually not CPU-bound, so the CPU is rarely the performance bottleneck. 16 cores are generally recommended for Kafka servers; 32 cores are even better if available.

### 3.4 Network

For a system like Kafka that moves large volumes of data over the network, bandwidth easily becomes the bottleneck. With 46,000 messages per second at peak and 10 KB per message, the data rate is 46,000 * 10 KB ≈ 460 MB per second. Data-center switches are typically 1 GbE or 10 GbE, but note that switch bandwidth is measured in bits while the figure we just computed is in bytes; multiplying by 8 gives 460 * 8 = 3,680 Mb per second, so the network should be at least 10 GbE.

### 3.5 File System

Kafka should be deployed on Linux in production. According to the official test reports, XFS performs better than ext4, so XFS is the recommended file system for production.

## 4 System Parameter Settings

### 4.1 File Descriptors

Kafka reads and writes a large number of files and maintains a large number of socket connections to clients; since everything is a file on Linux, these operations consume a lot of file descriptors. By default Linux allows each process only 1,024 open file descriptors, which is clearly not enough for Kafka, so raise the limit to 100,000.

```sh
# Edit /etc/security/limits.conf (persistent across reboots)
* - nofile 100000

# Or run on the command line (current session only)
ulimit -n 100000
```
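A quick way to confirm the new limit, both for the current shell and for the broker process itself once it is running. The `pgrep` pattern assumes the broker is started with the standard `kafka-server-start.sh` script, whose main class is `kafka.Kafka`.

```sh
# Limit for the current shell
ulimit -n

# Limit actually applied to the running broker process
cat /proc/$(pgrep -f kafka.Kafka)/limits | grep "open files"
```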
### 4.2 Thread Count

Kafka mainly uses the following kinds of threads:

- Kafka's network layer uses the Reactor pattern, an event-driven model with three kinds of threads:
  - **Acceptor thread**: a single accept thread that listens for new connection requests, registers the OP_ACCEPT event, and hands new connections to the Processor threads in round-robin fashion.
  - **Processor threads**: N processor threads, each with its own selector, which register OP_READ events on the SocketChannels assigned by the Acceptor. N is controlled by the `num.network.threads` parameter.
  - **KafkaRequestHandler threads**: M request handler threads, whose job is to take a Request from the requestQueue of the requestChannel, process it, and put the Response on the responseQueue of the requestChannel. M is controlled by the `num.io.threads` parameter.

![](https://static001.geekbang.org/infoq/68/68f6594d19b76f2a07be3cf3193f1b3b.png)

- In addition, Kafka runs a number of background threads:
  - Threads that periodically clean up expired data.
  - Controller threads that monitor and manage the whole cluster.
  - Replica fetcher threads that sync data from leaders.

We can raise the maximum number of processes/threads a user may create as follows.

```sh
# Edit /etc/security/limits.conf (persistent across reboots)
* - nproc 4096

# Or run on the command line (current session only)
ulimit -u 4096
```
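To confirm the limit and to see how many threads a running broker actually uses (the `pgrep` pattern is the same assumption as above):

```sh
# Max user processes/threads for the current shell
ulimit -u

# Thread count of the running broker process
grep Threads /proc/$(pgrep -f kafka.Kafka)/status
```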
### 4.3 Maximum Number of Memory Map Areas per Process

One of the reasons for Kafka's high throughput is its use of mmap: the offset and time index files of each log segment are memory-mapped into the broker process. mmap maps a disk file into the process address space, backed by the page cache, so index reads and writes go straight to the page cache without explicit read/write system calls or extra data copies between kernel space and user space, and changes are reflected back to the file on disk. Because every mapped index file consumes one memory map area, a broker that hosts many partitions and segments can exhaust the default limit, so the limit should be raised.

We can raise the maximum number of memory map areas a process may use as follows.

```sh
# Edit /etc/sysctl.conf (persistent), then run `sysctl -p` to apply it immediately
vm.max_map_count=262144

# Or run on the command line (takes effect immediately, lost on reboot)
sysctl -w vm.max_map_count=262144
```

### 4.4 Disabling Swap

Kafka makes heavy use of the page cache; if memory pages get swapped out to disk, Kafka's performance suffers badly. Disable swap permanently with the following command.

```sh
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
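To verify that no swap remains active, and, as an alternative that some operators prefer over disabling swap outright, to keep swap configured but strongly discourage the kernel from using it:

```sh
# Confirm that no swap devices or files are still in use
swapon --show
free -h

# Alternative to disabling swap: keep it, but set swappiness as low as possible
sysctl -w vm.swappiness=1
```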
### 4.5 JVM Parameters

Although Kafka's server-side code is written in Scala, it is ultimately compiled into class files and runs on the JVM, so a Java runtime must be installed before running Kafka. Kafka officially dropped support for Java 7 in version 2.0.0, so use at least Java 8.

Download the JDK 8 archive from the [Oracle download page](https://www.oracle.com/java/technologies/javase/javase8-archive-downloads.html).

Edit /etc/profile and add the following to set the Java environment variables; adjust the path to wherever the JDK is actually installed.

```sh
export JAVA_HOME=/software/jdk
export PATH=$PATH:$JAVA_HOME/bin
```

Edit /etc/profile and add the following JVM settings. Confluent's documentation recommends the GC tuning flags below, which have been validated in LinkedIn's large production clusters (on JDK 1.8 u5).

```sh
# Recommended GC tuning flags and JVM heap size
export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"

# Environment variables used later in this guide
# Kafka
export KAFKA_HOME=/usr/local/kafka
export PATH=$KAFKA_HOME/bin:$PATH

# JMX port, used later by Kafka Eagle monitoring
export JMX_PORT="9999"
```

Make the environment variables take effect.

```sh
source /etc/profile
```

For reference, these are the peak-hour statistics of one of LinkedIn's Kafka clusters:

- 60 brokers
- 50k partitions (replication factor 2)
- 800k messages/sec in
- 300 MBps inbound, 1 GBps+ outbound

Across all brokers in that cluster, the 90th-percentile GC pause was roughly 21 milliseconds, with fewer than one young GC per second.
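To see how your own brokers compare once they are running, the JDK's `jstat` tool can sample heap occupancy and GC counts and times. This is a quick sketch; the `pgrep` pattern again assumes the broker was started with the standard scripts.

```sh
# Print heap usage and GC counts/times of the broker once per second
jstat -gcutil $(pgrep -f kafka.Kafka) 1000
```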
## 5 Deploying the Kafka Cluster

### 5.1 Machine Planning

The Zookeeper nodes and the Kafka brokers share the same physical machines.

| IP address | Roles |
| --- | --- |
| 192.168.1.6 | Kafka Broker, Zookeeper, Kafka Eagle |
| 192.168.1.7 | Kafka Broker, Zookeeper |
| 192.168.1.8 | Kafka Broker, Zookeeper |
"}}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5.2 下載並解壓安裝包","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本次 Kafka 搭建的版本是 2.7.1,下載地址可以在 ","attrs":{}},{"type":"link","attrs":{"href":"https://kafka.apache.org/downloads","title":"","type":null},"content":[{"type":"text","text":"Kafka 官網下載頁面","attrs":{}}]},{"type":"text","text":" 中找到。將下載好的安裝包解壓到 /usr/local/kafka 目錄。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5.3 部署 Zookeeper","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kafka 官網提供的壓縮包中包含了 Zookeeper 所需的文件,我們可以直接使用 Kafka 提供的文件來部署 Zookeeper。當然你可以單獨下載 Zookeeper 的安裝包來部署。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"5.3.1 創建相關目錄","attrs":{}}]},{"type":"codeblock","attrs":{"lang":"sh"},"content":[{"type":"text","text":"mkdir -p /usr/local/zk\n","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"5.3.2 Zookeeper 配置文件","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"編輯 config/zookeeper.properties 文件,3 臺 Zookeeper 節點的配置文件是相同的。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"sh"},"content":[{"type":"text","text":"#ZooKeeper 使用的基本時間單位(毫秒),心跳超時時間是 tickTime 的兩倍\ntickTime=2000\n\n#Leader 和 Follower 初始連接時最多能容忍的最多心跳數(2000 * 10 = 20s)\ninitLimit=10\n\n#Leader 和 Follower 節點之間請求和應答之間能容忍的最多心跳數(2000 * 5 = 10s)\nsyncLimit=5\n\n#數據目錄\ndataDir=/usr/local/zk\n\n#監聽客戶端連接的端口\nclientPort=2181\n\n#最大客戶端連接數\nmaxClientCnxns=60\n\n#集羣信息(服務器編號,服務器地址,Leader-Follower 通信端口,選舉端口)\nserver.1=192.168.1.6:2888:3888\nserver.2=192.168.1.7:2888:3888\nserver.3=192.168.1.8:2888:3888\n\n#不啓動 jetty 管理頁面服務\nadmin.enableServer=false\n\n#運行所有四字指令\n4lw.commands.whitelist=*\n","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"5.3.3 設置節點 id","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"分別爲 3 臺 Zookeeper 節點設置不同的節點 id。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"sh"},"content":[{"type":"text","text":"#節點 1\necho \"1\" > /usr/local/zk/myid\n\n#節點 2\necho \"2\" > /usr/local/zk/myid\n\n#節點 3\necho \"3\" > /usr/local/zk/myid\n","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"5.3.4 啓動 Zookeeper","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在 3 臺機器上分別使用以下命令啓動 Zookeeper。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"sh"},"content":[{"type":"text","text":"zookeeper-server-start.sh -daemon config/zookeeper.properties \n","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"5.3.5 查看 Zookeeper","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"使用 zookeeper-shell 連接 
### 5.3 Deploying Zookeeper

The archive provided on the Kafka website already contains the files needed to run Zookeeper, so we can deploy Zookeeper directly from the Kafka distribution. You can of course also download a standalone Zookeeper package instead.

#### 5.3.1 Create the Required Directories

```sh
mkdir -p /usr/local/zk
```

#### 5.3.2 Zookeeper Configuration File

Edit config/zookeeper.properties; the configuration is identical on all 3 Zookeeper nodes.

```sh
# Basic time unit used by ZooKeeper, in milliseconds; the minimum session timeout is twice the tickTime
tickTime=2000

# Maximum number of ticks the followers may take to initially connect and sync to the leader (2000 * 10 = 20s)
initLimit=10

# Maximum number of ticks allowed between a request and an acknowledgement between leader and followers (2000 * 5 = 10s)
syncLimit=5

# Data directory
dataDir=/usr/local/zk

# Port for client connections
clientPort=2181

# Maximum number of client connections
maxClientCnxns=60

# Ensemble members (server id, address, leader-follower communication port, election port)
server.1=192.168.1.6:2888:3888
server.2=192.168.1.7:2888:3888
server.3=192.168.1.8:2888:3888

# Do not start the Jetty admin page service
admin.enableServer=false

# Allow all four-letter-word commands
4lw.commands.whitelist=*
```

#### 5.3.3 Set the Node id

Give each of the 3 Zookeeper nodes a different node id.

```sh
# Node 1
echo "1" > /usr/local/zk/myid

# Node 2
echo "2" > /usr/local/zk/myid

# Node 3
echo "3" > /usr/local/zk/myid
```

#### 5.3.4 Start Zookeeper

Start Zookeeper on all 3 machines with the following command.

```sh
zookeeper-server-start.sh -daemon config/zookeeper.properties
```

#### 5.3.5 Check Zookeeper

Connect to Zookeeper with zookeeper-shell:

```sh
zookeeper-shell.sh 192.168.1.6:2181
```

Then the following command shows the nodes registered in the Zookeeper ensemble.

```sh
get /zookeeper/config
server.1=192.168.1.6:2888:3888:participant
server.2=192.168.1.7:2888:3888:participant
server.3=192.168.1.8:2888:3888:participant
version=0
```
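Since the configuration whitelists all four-letter-word commands (`4lw.commands.whitelist=*`), a quick health check can also be done without a shell session, assuming `nc` from the netcat package is installed; run it against each node in turn.

```sh
# "srvr" prints the node's mode (leader/follower), latency and connection stats
echo srvr | nc 192.168.1.6 2181
```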
3。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"num.io.threads","attrs":{}}],"attrs":{}},{"type":"text","text":":Broker 用於處理 I/O 的線程數,推薦值 8 * 磁盤數,默認值 8.","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"queued.max.requests","attrs":{}}],"attrs":{}},{"type":"text","text":":在網絡線程停止讀取新請求之前,可以排隊等待 I/O 線程處理的最大請求個數,默認值 500。增大","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"queued.max.requests","attrs":{}}],"attrs":{}},{"type":"text","text":" 能夠緩存更多的請求。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"log.dirs","attrs":{}}],"attrs":{}},{"type":"text","text":":數據存放目錄,我們在每臺機器上使用 15 塊硬盤,每塊硬盤單獨掛載一個目錄。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Topic 相關參數:","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"num.partitions","attrs":{}}],"attrs":{}},{"type":"text","text":" Topic 的默認分區數。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"default.replication.factor","attrs":{}}],"attrs":{}},{"type":"text","text":" Topic 中每個分區的默認副本數。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"數據保留相關參數:","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"log.retention.hours","attrs":{}}],"attrs":{}},{"type":"text","text":":最多保留多少小時的數據。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"log.retention.bytes","attrs":{}}],"attrs":{}},{"type":"text","text":":最多保留多少字節的數據。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"sh"},"content":[{"type":"text","text":"############################# Server Basics #############################\n#broker 的 id,必須唯一\nbroker.id=0\n\n############################# Socket Server Settings #############################\n#監聽地址\nlisteners=PLAINTEXT://192.168.1.6:9092\n\n#Broker 用於處理網絡請求的線程數\nnum.network.threads=6\n\n#Broker 用於處理 I/O 的線程數,推薦值 8 * 磁盤數\nnum.io.threads=120\n\n#在網絡線程停止讀取新請求之前,可以排隊等待 I/O 線程處理的最大請求個數\nqueued.max.requests=1000\n\n#socket 發送緩衝區大小\nsocket.send.buffer.bytes=102400\n\n#socket 接收緩衝區大小\nsocket.receive.buffer.bytes=102400\n\n#socket 接收請求的最大值(防止 OOM)\nsocket.request.max.bytes=104857600\n\n\n############################# Log 
#### 5.4.2 Start Kafka

Start Kafka in the background with the following command.

```sh
kafka-server-start.sh -daemon config/server.properties
```

#### 5.4.3 Check the Kafka Cluster

```sh
# Connect to Zookeeper
zookeeper-shell.sh 127.0.0.1:2181

# List the Kafka brokers
ls /brokers/ids
[0, 1, 2]

# Show the current Kafka controller
get /controller
{"version":1,"brokerid":0,"timestamp":"1631005545929"}
```
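As a final end-to-end smoke test, produce and consume a few messages through the cluster with the console tools shipped in the distribution. The `demo` topic is the hypothetical one created in the earlier example; any test topic works.

```sh
# Produce a couple of messages (type them, then Ctrl-C to exit)
kafka-console-producer.sh --bootstrap-server 192.168.1.6:9092 --topic demo

# Read them back from the beginning in another terminal
kafka-console-consumer.sh --bootstrap-server 192.168.1.6:9092 --topic demo --from-beginning
```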
## 6 Deploying the Kafka Eagle Dashboard (Optional)

Kafka Eagle is a visualization and management tool for Kafka that can manage multiple Kafka clusters of different versions. It can monitor the health of a Kafka cluster and the consumption progress of consumer groups, create and delete topics, run ad-hoc queries over Kafka messages with KSQL, send alerts through a variety of channels, and more.

![](https://static001.geekbang.org/infoq/91/91406823b085be8d00b3f04daeff8bab.png)

### 6.1 Download and Extract the Package

Download the archive from the [Kafka Eagle download page](http://download.kafka-eagle.org/) and extract it to the /usr/local/kafka-eagle directory.

### 6.2 Set Environment Variables

Edit /etc/profile to set the environment variables:

```sh
export KE_HOME=/usr/local/kafka-eagle/kafka-eagle-web-2.0.6
export PATH=$PATH:$KE_HOME/bin
```
### 6.3 Configure Kafka Eagle

Edit the conf/system-config.properties configuration file:

```properties
######################################
# List of Kafka clusters
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.1.6:2181,192.168.1.7:2181,192.168.1.8:2181

######################################
# Maximum size of the Zookeeper connection pool
######################################
kafka.zk.limit.size=32

######################################
# Port of the Kafka Eagle web UI
######################################
kafka.eagle.webui.port=8048

######################################
# Where consumer offsets are stored. Before Kafka 0.9 they were
# kept in Zookeeper by default, so set this to "zookeeper";
# from 0.10 onwards they are stored in Kafka itself, so set it
# to "kafka".
######################################
cluster1.kafka.eagle.offset.storage=kafka

######################################
# JMX metrics collection; JMX must be enabled on the Kafka brokers
######################################
cluster1.kafka.eagle.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi

######################################
# Enable metrics charts and set the retention period in days
######################################
kafka.eagle.metrics.charts=true
kafka.eagle.metrics.retain=15

######################################
# kafka sql topic records max
######################################
kafka.eagle.sql.topic.records.max=5000
kafka.eagle.sql.topic.preview.records.max=10

######################################
# Token that must be entered when deleting a Kafka topic
######################################
kafka.eagle.topic.token=keadmin

######################################
# Database that stores Kafka Eagle's metadata.
# MySQL and Sqlite are supported; Sqlite is used by default.
######################################
kafka.eagle.driver=org.sqlite.JDBC
kafka.eagle.url=jdbc:sqlite:/usr/local/kafka-eagle/kafka-eagle-web-2.0.6/db/ke.db
kafka.eagle.username=root
kafka.eagle.password=123456
```

### 6.4 Start Kafka Eagle

Start Kafka Eagle with the following command:

```sh
bin/ke.sh start
```

Output like the following indicates that Kafka Eagle started successfully.

```sh
......
Welcome to
    (Kafka Eagle ASCII-art banner)

Version 2.0.6 -- Copyright 2016-2021
*******************************************************************
* Kafka Eagle Service has started success.
* Welcome, Now you can visit 'http://192.168.1.6:8048'
* Account:admin ,Password:123456
*******************************************************************
* ke.sh [start|status|stop|restart|stats]
* https://www.kafka-eagle.org/
*******************************************************************
```
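An optional shell-level check that the dashboard is up and that the brokers expose JMX for it (`ss` is from the iproute2 package):

```sh
# Kafka Eagle should be listening on 8048, the brokers on JMX port 9999
ss -lntp | grep -E ':8048|:9999'

# The web UI should answer HTTP requests
curl -sI http://192.168.1.6:8048 | head -n 1
```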
Topic。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/8c/8c600de1fe4d487db06a6a85025fcc20.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"參考資料","attrs":{}}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https://docs.confluent.io/platform/current/kafka/deployment.html","title":"","type":null},"content":[{"type":"text","text":"Confluent 官網 Running Kafka in Production","attrs":{}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https://segmentfault.com/a/1190000039723251","title":"","type":null},"content":[{"type":"text","text":"Kafka(4)-kafka生產環境規劃部署","attrs":{}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https://mp.weixin.qq.com/s/cU6fkgQH4ErTP-lKiVotrA","title":"","type":null},"content":[{"type":"text","text":"2萬長文,一文搞懂Kafka","attrs":{}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https://mp.weixin.qq.com/s?__biz=MzI4NjY4MTU5Nw==&mid=2247487568&idx=1&sn=fcd54d366b4e6ca049c43a3ce9d32ce4&scene=21#wechat_redirect","title":"","type":null},"content":[{"type":"text","text":"Linux Page Cache調優在Kafka中的應用","attrs":{}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https://mp.weixin.qq.com/s/_1mnDFITm11AzMKXqmqFzg","title":"","type":null},"content":[{"type":"text","text":"聊聊 page cache 與 Kafka 
- [Kafka線上集羣部署方案怎麼做?](https://time.geekbang.org/column/article/101107)
- [最最最重要的集羣參數配置(上)](https://time.geekbang.org/column/article/101171)
- [最最最重要的集羣參數配置(下)](https://time.geekbang.org/column/article/101763)
- [Kafka原理:kafka之mmap文件讀寫方式](https://blog.csdn.net/daijiguo/article/details/104871390)
- [Apache Kafka 集羣部署指南](https://mp.weixin.qq.com/s/IiPeTZf6wd5OLqSJ5dySJQ)
- [圖解Kafka的零拷貝技術到底有多牛?](https://cloud.tencent.com/developer/article/1421266)
- [終於知道 Kafka 爲什麼這麼快了!](https://xie.infoq.cn/article/c06fea629926e2b6a8073e2f0)
- [深入瞭解Kafka【一】概述與基礎架構](https://segmentfault.com/a/1190000021175583)
- [Kafka2.8安裝](https://www.cnblogs.com/smartloli/p/14722529.html)
- [Apache Kafka 3.0 版本發佈](https://blog.csdn.net/zhongqi2513/article/details/120429015)
- [Kafka 性能優化與問題深究](https://blog.csdn.net/qq_41324009/article/details/100584223)