美團把 Kafka 作爲應用層緩存的實踐

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kafka在美團數據平臺承擔着統一的數據緩存和分發的角色,針對因PageCache互相污染,進而引發PageCache競爭導致實時作業被延遲作業影響的痛點,美團基於SSD自研了Kafka的應用層緩存架構。本文主要介紹了該架構的設計與實現,主要包括方案選型,與其他備選方案的比較以及方案的核心思考點等,最後介紹該方案與其他備選方案的性能對比。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Kafka在美團數據平臺的現狀"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kafka出色的I\/O優化以及多處異步化設計,相比其他消息隊列系統具有更高的吞吐,同時能夠保證不錯的延遲,十分適合應用在整個大數據生態中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前在美團數據平臺中,Kafka承擔着數據緩衝和分發的角色。如下圖所示,業務日誌、接入層Nginx日誌或線上DB數據通過數據採集層發送到Kafka,後續數據被用戶的實時作業消費、計算,或經過數倉的ODS層用作數倉生產,還有一部分則會進入公司統一日誌中心,幫助工程師排查線上問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/b3\/b37a27343282a698fbc203008d1ed19d.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前美團線上Kafka規模:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"集羣規模"},{"type":"text","text":":節點數達6000+,集羣數100+。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"集羣承載"},{"type":"text","text":":Topic數6萬+,Partition數41萬+。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"處理的消息規模"},{"type":"text","text":":目前每天處理消息總量達8萬億,峯值流量爲1.8億條\/秒"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"提供的服務規模"},{"type":"text","text":":目前下游實時計算平臺運行了3萬+作業,而這其中絕大多數的數據源均來自Kafka。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Kafka線上痛點分析&核心目標"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"當前Kafka支撐的實時作業數量衆多,單機承載的Topic和Partition數量很大。這種場景下很容易出現的問題是:同一臺機器上不同Partition間競爭PageCache資源,相互影響,導致整個Broker的處理延遲上升、吞吐下降。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"接下來,我們將結合Kafka讀寫請求的處理流程以及線上統計的數據來分析一下Kafka在線上的痛點。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"原理分析"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ed\/ed08de332d3f5418fa82f67a4711c1d3.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"Kafka處理讀寫流程的示意圖"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"對於Produce請求"},{"type":"text","text":":Server端的I\/O線程統一將請求中的數據寫入到操作系統的PageCache後立即返回,當消息條數到達一定閾值後,Kafka應用本身或操作系統內核會觸發強制刷盤操作(如左側流程圖所示)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"對於Consume請求"},{"type":"text","text":":主要利用了操作系統的ZeroCopy機制,當Kafka Broker接收到讀數據請求時,會向操作系統發送sendfile系統調用,操作系統接收後,首先試圖從PageCache中獲取數據(如中間流程圖所示);如果數據不存在,會觸發缺頁異常中斷將數據從磁盤讀入到臨時緩衝區中(如右側流程圖所示),隨後通過DMA操作直接將數據拷貝到網卡緩衝區中等待後續的TCP傳輸。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"綜上所述,Kafka對於單一讀寫請求均擁有很好的吞吐和延遲。處理寫請求時,數據寫入PageCache後立即返回,數據通過異步方式批量刷入磁盤,既保證了多數寫請求都能有較低的延遲,同時批量順序刷盤對磁盤更加友好。處理讀請求時,實時消費的作業可以直接從PageCache讀取到數據,請求延遲較小,同時ZeroCopy機制能夠減少數據傳輸過程中用戶態與內核態的切換,大幅提升了數據傳輸的效率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但當同一個Broker上同時存在多個Consumer時,就可能會由於多個Consumer競爭PageCache資源導致它們同時產生延遲。下面我們以兩個Consumer爲例詳細說明:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/84\/844bddb7835ccf01aa7ca8b6fa24a5a0.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如上圖所示,Producer將數據發送到Broker,PageCache會緩存這部分數據。當所有Consumer的消費能力充足時,所有的數據都會從PageCache讀取,全部Consumer實例的延遲都較低。此時如果其中一個Consumer出現消費延遲(圖中的Consumer Process2),根據讀請求處理流程可知,此時會觸發磁盤讀取,在從磁盤讀取數據的同時會預讀部分數據到PageCache中。當PageCache空間不足時,會按照LRU策略開始淘汰數據,此時延遲消費的Consumer讀取到的數據會替換PageCache中實時的緩存數據。後續當實時消費請求到達時,由於PageCache中的數據已被替換掉,會產生預期外的磁盤讀取。這樣會導致兩個後果:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"消費能力充足的Consumer消費時會失去PageCache的性能紅利。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"多個Consumer相互影響,預期外的磁盤讀增多,HDD負載升高。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們針對HDD的性能和讀寫併發的影響做了梯度測試,如下圖所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/f7\/f7c89c7528bbf5557b13460995e78d1b.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可以看到,隨着讀併發的增加,HDD的IOPS和帶寬均會明顯下降,這會進一步影響整個Broker的吞吐以及處理延遲。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"線上數據統計"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前Kafka集羣TP99流量在170MB\/s,TP95流量在100MB\/s,TP50流量爲50-60MB\/s;單機的PageCache平均分配爲80GB,取TP99的流量作爲參考,在此流量以及PageCache分配情況下,PageCache最大可緩存數據時間跨度爲80*1024\/170\/60 = 8min,可見當前Kafka服務整體對延遲消費作業的容忍性極低。該情況下,一旦部分作業消費延遲,實時消費作業就可能會受到影響。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同時,我們統計了線上實時作業的消費延遲分佈情況,延遲範圍在0-8min(實時消費)的作業只佔80%,說明目前存在線上存在20%的作業處於延遲消費的狀態。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"痛點分析總結"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"總結上述的原理分析以及線上數據統計,目前線上Kafka存在如下問題:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"實時消費與延遲消費的作業在PageCache層次產生競爭,導致實時消費產生非預期磁盤讀。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"傳統HDD隨着讀併發升高性能急劇下降。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"線上存在20%的延遲消費作業。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"按目前的PageCache空間分配以及線上集羣流量分析,Kafka無法對實時消費作業提供穩定的服務質量保障,該痛點亟待解決。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"預期目標"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根據上述痛點分析,我們的預期目標爲保證實時消費作業不會由於PageCache競爭而被延遲消費作業影響,保證Kafka對實時消費作業提供穩定的服務質量保障。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"解決方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"爲什麼選擇SSD"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根據上述原因分析可知,解決目前痛點可從以下兩個方向來考慮:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"消除實時消費與延遲消費間的PageCache競爭,如:讓延遲消費作業讀取的數據不回寫PageCache,或增大PageCache的分配量等。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"在HDD與內存之間加入新的設備,該設備擁有比HDD更好的讀寫帶寬與IOPS。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於第一個方向,由於PageCache由操作系統管理,若修改其淘汰策略,那麼實現難度較爲複雜,同時會破壞內核本身對外的語義。另外,內存資源成本較高,無法進行無限制的擴展,因此需要考慮第二個方向。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"SSD目前發展日益成熟,相較於HDD,SSD的IOPS與帶寬擁有數量級級別的提升,很適合在上述場景中當PageCache出現競爭後承接部分讀流量。我們對SSD的性能也進行了測試,結果如下圖所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ea\/ea5ddc9a9f7e86a05d98a136a359f851.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從圖中可以發現,隨着讀取併發的增加,SSD的IOPS與帶寬並不會顯著降低。通過該結論可知,我們可以使用SSD作爲PageCache與HDD間的緩存層。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"架構決策"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在引入SSD作爲緩存層後,下一步要解決的關鍵問題包括PageCache、SSD、HDD三者間的數據同步以及讀寫請求的數據路由等問題,同時我們的新緩存架構需要充分匹配Kafka引擎讀寫請求的特徵。本小節將介紹新架構如何在選型與設計上解決上述提到的問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kafka引擎在讀寫行爲上具有如下特性:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"數據的消費頻率隨時間變化,越久遠的數據消費頻率越低。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"每個分區(Partition)只有Leader提供讀寫服務。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於一個客戶端而言,消費行爲是線性的,數據並不會重複消費。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下文給出了兩種備選方案,下面將對兩種方案給出我們的選取依據與架構決策。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"備選方案一:基於操作系統內核層實現"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前開源的緩存技術有FlashCache、BCache、DM-Cache、OpenCAS等,其中BCache和DM-Cache已經集成到Linux中,但對內核版本有要求,受限於內核版本,我們僅能選用FlashCache\/OpenCAS。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如下圖所示,FlashCache以及OpenCAS二者的核心設計思路類似,兩種架構的核心理論依據爲“數據局部性”原理,將SSD與HDD按照相同的粒度拆成固定的管理單元,之後將SSD上的空間映射到多塊HDD層的設備上(邏輯映射or物理映射)。在訪問流程上,與CPU訪問高速緩存和主存的流程類似,首先嚐試訪問Cache層,如果出現CacheMiss,則會訪問HDD層,同時根據數據局部性原理,這部分數據將回寫到Cache層。如果Cache空間已滿,會通過LRU策略替換部分數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/2a\/2af60399982d367b1e4e2c0d7bd1cc5d.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"FlashCache\/OpenCAS提供了四種緩存策略:WriteThrough、WriteBack、WriteAround、WriteOnly。由於第四種不做讀緩存,這裏我們只看前三種。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"寫入:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"WriteThrough"},{"type":"text","text":":數據寫操作在寫入SSD的同時會寫入到後端存儲。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"WriteBack"},{"type":"text","text":":數據寫操作僅寫入SSD即返回,由緩存策略flush到後臺存儲。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"WriteAround"},{"type":"text","text":":數據寫入操作直接寫入後端存儲,同時SSD對應的緩存會失效。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"讀取:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"WriteThrough\/WriteBack\/WriteAround"},{"type":"text","text":":首先讀取SSD,命中不了的將再次讀取後端存儲,並數據會被刷入到SSD緩存中。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/f6\/f68fa9557ab5f4c5947198225549b1c3.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"更多詳細實現細節,極大可參見這二者的官方文檔:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"FlashCache"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"OpenCAS"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"備選方案二:Kafka應用內部實現"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上文提到的第一類備選方案中,核心的理論依據“數據局部性”原理與Kafka的讀寫特性並不能完全吻合,“數據回刷”這一特性依然會引入緩存空間污染問題。同時,上述架構基於LRU的淘汰策略也與Kafka讀寫特性存在矛盾,在多Consumer併發消費時,LRU淘汰策略可能會誤淘汰掉一些近實時數據,導致實時消費作業出現性能抖動。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可見,備選方案一併不能完全解決當前Kafka的痛點,需要從應用內部進行改造。整體設計思路如下,將數據按照時間維度分佈在不同的設備中,近實時部分的數據緩存在SSD中,這樣當出現PageCache競爭時,實時消費作業從SSD中讀取數據,保證實時作業不會受到延遲消費作業影響。下圖展示了基於應用層實現的架構處理讀請求的流程:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/07\/07bfdfdf56383cf778f12c19d5777fe7.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"當消費請求到達Kafka Broker時,Kafka Broker直接根據其維護的消息偏移量(Offset)和設備的關係從對應的設備中獲取數據並返回,並且在讀請求中並不會將HDD中讀取的數據回刷到SSD,防止出現緩存污染。同時訪問路徑明確,不會由於Cache Miss而產生的額外訪問開銷。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下表對不同候選方案進行了更加詳細的對比:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ad\/ad15f2010ef8afd110ddaac193ae9a9b.webp","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最終,結合與Kafka讀寫特性的匹配度,整體工作量等因素綜合考慮,我們採用Kafka應用層實現這一方案,因爲該方案更貼近Kafka本身讀寫特性,能更加徹底地解決Kafka的痛點。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"新架構設計"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"概述"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根據上文對Kafka讀寫特性的分析,我們給出應用層基於SSD的緩存架構的設計目標:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"數據按時間維度分佈在不同的設備上,近實時數據分佈在SSD上,隨時間的推移淘汰到HDD上。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Leader分區中所有數據均寫入SSD中。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從HDD中讀取的數據不回刷到SSD中。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"依據上述目標,我們給出應用層基於SSD的Kafka緩存架構實現:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kafka中一個Partition由若干LogSegment構成,每個LogSegment包含兩個索引文件以及日誌消息文件。一個Partition的若干LogSegment按Offset(相對時間)維度有序排列。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/35\/35a8345abca03377ae487105165af24d.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根據上一小節的設計思路,我們首先將不同的LogSegment標記爲不同的狀態,如圖所示(圖中上半部分)按照時間維度分爲OnlyCache、Cached以及WithoutCache三種常駐狀態。而三種狀態的轉換以及新架構對讀寫操作的處理如圖中下半部分所示,其中標記爲OnlyCached狀態的LogSegment只存儲在SSD上,後臺線程會定期將Inactive(沒有寫流量)的LogSegment同步到SSD上,完成同步的LogSegment被標記爲Cached狀態。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最後,後臺線程將會定期檢測SSD上的使用空間,當空間達到閾值時,後臺線程將會按照時間維度將距離現在最久的LogSegment從SSD中移除,這部分LogSegment會被標記爲WithoutCache狀態。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於寫請求而言,寫入請求依然首先將數據寫入到PageCache中,滿足閾值條件後將會刷入SSD。對於讀請求(當PageCache未獲取到數據時),如果讀取的offset對應的LogSegment的狀態爲Cached或OnlyCache,則數據從SSD返回(圖中LC2-LC1以及RC1),如果狀態爲WithoutCache,則從HDD返回(圖中LC1)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於Follower副本的數據同步,可根據Topic對延遲以及穩定性的要求,通過配置決定寫入到SSD還是HDD。"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"關鍵優化點"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上文介紹了基於SSD的Kafka應用層緩存架構的設計概要以及核心設計思路,包括讀寫流程、內部狀態管理以及新增後臺線程功能等。本小節將介紹該方案的關鍵優化點,這些優化點均與服務的性能息息相關。主要包括LogSegment同步以及Append刷盤策略優化,下面將分別進行介紹。"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"LogSegment同步"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LogSegment同步是指將SSD上的數據同步到HDD上的過程,該機制在設計時主要有以下兩個關鍵點:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"同步的方式"},{"type":"text","text":":同步方式決定了HDD上對SSD數據的可見時效性,從而會影響故障恢復以及LogSegment清理的及時性。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"同步限速"},{"type":"text","text":":LogSegment同步過程中通過限速機制來防止同步過程中對正常讀寫請求造成影響"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"同步方式"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"關於LogSegment的同步方式,我們給出了三種備選方案,下表列舉了三種方案的介紹以及各自的優缺點:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/6b\/6b4f592070dc4bc3e1af8b2e267efce7.webp","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最終,我們對一致性維護代價、實現複雜度等因素綜合考慮,選擇了後臺同步Inactive的LogSegment的方式。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"同步限速"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LogSegment同步行爲本質上是設備間的數據傳輸,會同時在兩個設備上產生額外的讀寫流量,佔用對應設備的讀寫帶寬。同時,由於我們選擇了同步Inactive部分的數據,需要進行整段的同步。如果在同步過程中不加以限制會對服務整體延遲造成較大的影響,主要表現在下面兩個方面:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從單盤性能角度,由於SSD的性能遠高於HDD,因此在數據傳輸時,HDD寫入帶寬會被寫滿,此時其他的讀寫請求會出現毛刺,如果此時有延遲消費從HDD上讀取數據或者Follower正在同步數據到HDD上,會造成服務抖動。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從單機部署的角度,單機會部署2塊SSD與10塊HDD,因此在同步過程中,1塊SSD需要承受5塊HDD的寫入量,因此SSD同樣會在同步過程中出現性能毛刺,影響正常的請求響應延遲。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基於上述兩點,我們需要在LogSegment同步過程中增加限速機制,總體的限速原則爲在不影響正常讀寫請求延遲的情況下儘可能快速地進行同步。因爲同步速度過慢會導致SSD數據無法被及時清理而最終被寫滿。同時爲了可以靈活調整,該配置也被設置爲單Broker粒度的配置參數。"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"日誌追加刷盤策略優化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"除了同步問題,數據寫入過程中的刷盤機制同樣影響服務的讀寫延遲。該機制的設計不僅會影響新架構的性能,對原生Kafka同樣會產生影響。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下圖展示了單次寫入請求的處理流程:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/22\/22dff9243f120d74e3b604dba3fdd90f.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在Produce請求處理流程中,首先根據當前LogSegment的位置與請求中的數據信息確定是否需要滾動日誌段,隨後將請求中的數據寫入到PageCache中,更新LEO以及統計信息,最後根據統計信息確定是否需要觸發刷盤操作,如果需要則通過"},{"type":"codeinline","content":[{"type":"text","text":"fileChannel.force"}]},{"type":"text","text":"強制刷盤,否則請求直接返回。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在整個流程中,除日誌滾動與刷盤操作外,其他操作均爲內存操作,不會帶來性能問題。日誌滾動涉及文件系統的操作,目前,Kafka中提供了日誌滾動的擾動參數,防止多個Segment同時觸發滾動操作給文件系統帶來壓力。針對日誌刷盤操作,目前Kafka給出的機制是以固定消息條數觸發強制刷盤(目前線上爲50000),該機制只能保證在入流量一定時,消息會以相同的頻率刷盤,但無法限制每次刷入磁盤的數據量,對磁盤的負載無法提供有效的限制。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如下圖所示,爲某磁盤在午高峯時間段write_bytes的瞬時值,在午高峯時間段,由於寫入流量的上升,在刷盤過程中會產生大量的毛刺,而毛刺的值幾乎接近磁盤最大的寫入帶寬,這會使讀寫請求延遲發生抖動。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/3f\/3f1352f3778e34b9bfb2eadbd1c0e6b1.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"針對該問題,我們修改了刷盤的機制,將原本的按條數限制修改爲按實際刷盤的速率限制,對於單個Segment,刷盤速率限制爲2MB\/s。該值考慮了線上實際的平均消息大小,如果設置過小,對於單條消息較大的Topic會過於頻繁的進行刷新,在流量較高時反而會加重平均延遲。目前該機制已在線上小範圍灰度,右圖展示了灰度後同時間段對應的write_bytes指標,可以看到相比左圖,數據刷盤速率較灰度前明顯平滑,最高速率僅爲40MB\/s左右。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於SSD新緩存架構,同樣存在上述問題,因此在新架構中,在刷盤操作中同樣對刷盤速率進行了限制。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"方案測試"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"測試目標"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"驗證基於應用層的SSD緩存架構能夠避免實時作業受到延遲作業的影響。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"驗證相比基於操作系統內核層實現的緩存層架構,基於應用層的SSD架構在不同流量下讀寫延遲更低。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"測試場景描述"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"構建4個集羣:新架構集羣、普通HDD集羣、FlashCache集羣、OpenCAS集羣。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"每個集羣3個節點。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"固定寫入流量,比較讀、寫耗時。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"延遲消費設置:只消費相對當前時間10~150分鐘的數據(超過PageCache的承載區域,不超過SSD的承載區域)。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"測試內容及重點關注指標"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Case1: 僅有延遲消費時,觀察集羣的生產和消費性能。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"重點關注的指標:寫耗時、讀耗時,通過這2個指標體現出讀寫延遲。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"命中率指標:HDD讀取量、HDD讀取佔比(HDD讀取量\/讀取總量)、SSD讀取命中率,通過這3個指標體現出SSD緩存的命中率。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Case2: 存在延遲消費時,觀察實時消費的性能。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"重點指標:實時作業的SLA(服務質量)的5個不同時間區域的佔比情況。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"測試結果"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"從單Broker請求延遲角度看:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在刷盤機制優化前,SSD新緩存架構在所有場景下,較其他方案都具有明顯優勢。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ae\/aea7a1fa19b314b2eab29346ccbca568.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"刷盤機制優化後,其餘方案在延遲上服務質量有提升,在較小流量下由於Flush機制的優化,新架構與其他方案的優勢變小。當單節點寫入流量較大時(大於170MB)優勢明顯。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/b9\/b9af791e0e1f01a2eecfc92c935d3460.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"從延遲作業對實時作業的影響方面看:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"新緩存架構在測試所涉及的所有場景中,延遲作業都不會對實時作業產生影響,符合預期。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/c9\/c9771a4c01a8e3cf0e20c0f1586dd5c7.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"總結與未來展望"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kafka在美團數據平臺承擔統一的數據緩存和分發的角色,針對目前由於PageCache互相污染、進而引發PageCache競爭導致實時作業被延遲作業影響的痛點,我們基於SSD自研了Kafka的應用層緩存架構。本文主要介紹Kafka新架構的設計思路以及與其他開源方案的對比。與普通集羣相比,新緩存架構具備非常明顯的優勢:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"降低讀寫耗時"},{"type":"text","text":":比起普通集羣,新架構集羣讀寫耗時降低80%。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"實時消費不受延遲消費的影響"},{"type":"text","text":":比起普通集羣,新架構集羣實時讀寫性能穩定,不受延時消費的影響。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前,這套緩存架構優已經驗證完成,正在灰度階段,未來也優先部署到高優集羣。其中涉及的代碼也將提交給Kafka社區,作爲對社區的回饋,也歡迎大家跟我們一起交流。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"作者簡介:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"世吉,仕祿,均爲美團數據平臺工程師。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"本文轉載自美團技術團隊(ID:meituantech)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"原文鏈接:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/mp.weixin.qq.com\/s\/_W81tlNQo2ScstoOBaTLzg","title":"xxx","type":null},"content":[{"type":"text","text":"https:\/\/mp.weixin.qq.com\/s\/_W81tlNQo2ScstoOBaTLzg"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章