How Grab, the "Meituan" of Southeast Asia, Optimises Its Search Indexing

> Grab is a Singapore-headquartered ride-hailing and food-delivery platform whose business covers most of Southeast Asia, serving more than 187 million users across over 350 cities in 8 countries. Grab currently offers ride-hailing, food delivery, hotel booking, online banking, mobile payments, and insurance services, making it the "Meituan" of Southeast Asia. Grab Engineering shared their methods and lessons from optimising their search indexing, which InfoQ China translated and shares here.

Today's applications commonly use a variety of database engines, each serving a specific need. For Grab Deliveries, [MySQL](https://www.mysql.com/cn/) stores data in typical formats, while [Elasticsearch](https://www.elastic.co/cn/elasticsearch/) provides advanced search capabilities. MySQL is the primary store for the original data, and Elasticsearch is the derived store.

![Search data flow](https://static001.geekbang.org/infoq/5d/5d974393168c21c57d3788426371066e.jpeg)

*Search data flow*

A great deal of work has gone into keeping the data between MySQL and Elasticsearch in sync. This article presents a series of techniques for optimising incremental search-data indexing.

## Background

Synchronising data from the primary data store to the derived data store is handled by the Data Synchronisation Platform (DSP), Food-Puxian. For the search service, this means synchronising data between MySQL and Elasticsearch.

The synchronisation process is triggered by every real-time data update in MySQL, which delivers the updated data to Kafka. The DSP consumes the list of Kafka streams and incrementally updates the corresponding search indices in Elasticsearch. This process is also known as incremental sync.

### Kafka to the Data Synchronisation Platform

The DSP implements incremental sync using [Kafka](https://kafka.apache.org/) streams. A "stream" is an unbounded, continuously updated data set that is ordered, replayable, and fault-tolerant.

![Data synchronisation process via Kafka](https://static001.geekbang.org/infoq/a5/a5e613806a14b7b8a6b1f1c144ddc8e7.jpeg)

*Data synchronisation process via Kafka*

The figure above depicts the data synchronisation process using Kafka. The data producer creates a Kafka stream for every operation on MySQL and sends it to Kafka in real time. The DSP creates one stream consumer per Kafka stream; each consumer reads data updates from its stream and synchronises them to Elasticsearch.
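The incremental-sync loop described above can be sketched as follows. The production system is written in Go (the article later mentions goroutines); this is a minimal, language-agnostic illustration in Python, and `KafkaStream` and `SearchIndex` are hypothetical in-memory stand-ins, not Grab's actual clients.

```python
# Minimal sketch of the incremental sync loop: a stream consumer reads
# update events from its Kafka stream and applies them to Elasticsearch.
# `KafkaStream` and `SearchIndex` are illustrative stand-ins only.
from collections import deque

class KafkaStream:
    """An ordered, replayable stream of update events."""
    def __init__(self, events):
        self._events = deque(events)

    def poll(self):
        return self._events.popleft() if self._events else None

class SearchIndex:
    """Stand-in for an Elasticsearch index: documents keyed by ID."""
    def __init__(self):
        self.docs = {}

    def upsert(self, doc_id, doc):
        self.docs[doc_id] = doc

def run_consumer(stream, index):
    """Drain the stream and sync every update into the search index."""
    while (event := stream.poll()) is not None:
        index.upsert(event["id"], event["payload"])

stream = KafkaStream([
    {"id": "a1", "payload": {"name": "Burger Shack"}},
    {"id": "a1", "payload": {"name": "Burger Shack", "rating": 4.5}},
])
index = SearchIndex()
run_consumer(stream, index)
print(index.docs["a1"])  # the later update wins
```

Because the stream is ordered, applying events in arrival order leaves the index reflecting the latest state of each row.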
### MySQL to Elasticsearch

Indices in Elasticsearch correspond to tables in MySQL. MySQL data is stored in tables, while Elasticsearch data is stored in indices. Multiple MySQL tables are joined to form a single Elasticsearch index. The diagram below shows the entity-relationship mapping between MySQL and Elasticsearch. Entity A has a one-to-many relationship with entity B. Entity A has several related tables in MySQL, i.e. tables A1 and A2, which are joined into one Elasticsearch index A.

![ER mapping in MySQL and Elasticsearch](https://static001.geekbang.org/infoq/b2/b23f6714e77966e5aa40e28880bcc605.jpeg)

*ER mapping in MySQL and Elasticsearch*

Sometimes a search index contains both entity A and entity B. For a keyword search query against that index, e.g. "Burger", objects from both entity A and entity B whose name contains "Burger" are returned in the search response.

## Original incremental sync

### Original Kafka streams

In the ER diagram shown above, the data producer creates one Kafka stream for every MySQL table. Whenever an insert, update, or delete occurs in MySQL, a copy of the data after the operation is sent to its Kafka stream. For each Kafka stream, the DSP creates a different stream consumer (Stream Consumer), since the streams have different data structures.

### Stream consumer infrastructure

A stream consumer consists of three components:

- **Event Dispatcher**: listens for and fetches events from the Kafka stream, pushes them into the event buffer, and spawns a goroutine to run an event handler for every event whose ID does not already exist in the event buffer.
- **Event Buffer**: events are cached in memory by primary key (aID, bID, etc.). An event stays in the buffer until it is picked up by a goroutine, or is replaced when a new event with the same primary key is pushed into the buffer.
- **Event Handler**: reads events from the event buffer and processes them; it is run by the goroutine spawned by the event dispatcher.
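The interaction of the three components can be sketched as below. In production the dispatcher spawns a goroutine per ID; this Python sketch invokes the handler synchronously to stay minimal, and all names are illustrative, not Grab's actual code.

```python
# Sketch of the three stream-consumer components: dispatcher, buffer,
# handler. The handler call stands in for a spawned goroutine.
class EventBuffer:
    """Events cached in memory by primary key; each sub-buffer holds at
    most one event, so a newer event replaces an unprocessed older one."""
    def __init__(self):
        self._sub = {}

    def push(self, event):
        self._sub[event["id"]] = event  # replace any same-ID event

    def pop(self, key):
        return self._sub.pop(key, None)

    def ids(self):
        return list(self._sub)

def event_handler(event, index):
    """Builds/updates the Elasticsearch document for the event's ID."""
    index[event["id"]] = event["payload"]

def event_dispatcher(incoming, buffer, index):
    """Pushes events into the buffer, then runs a handler per distinct ID
    (stand-in for spawning one goroutine per buffered ID)."""
    for event in incoming:
        buffer.push(event)
    for key in buffer.ids():
        event_handler(buffer.pop(key), index)

index = {}
buf = EventBuffer()
event_dispatcher(
    [{"id": "b7", "payload": {"price": 1}},
     {"id": "b7", "payload": {"price": 2}}],  # replaces the first event
    buf, index)
print(index)  # only the latest b7 event was processed
```

Note how the second `b7` event displaces the first before any handler runs; this replace-on-push behaviour is exactly what a later optimisation revisits.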
![Stream consumer infrastructure](https://static001.geekbang.org/infoq/27/272440b43b383473137153c54b80793e.jpeg)

*Stream consumer infrastructure*

### Event buffer procedure

The event buffer consists of many sub-buffers, each with a unique ID, which is the primary key of the event cached in it. The maximum size of a sub-buffer is 1. This lets the event buffer deduplicate events sharing the same ID in the buffer.

The figure below shows the process of pushing an event into the event buffer. When a new event is pushed into the buffer, the old event sharing its ID is replaced. The replaced event is therefore never processed.

![Pushing an event into the event buffer](https://static001.geekbang.org/infoq/28/28e55e09374d8afb4d3bc99a04cd0286.jpeg)

*Pushing an event into the event buffer*

### Event handler procedure

The flow chart below shows the procedure executed by the event handler, including the common handler flow (in white) and the additional steps for object B events (in green). When creating a new Elasticsearch document from data loaded from the database, the handler fetches the original document from Elasticsearch, compares whether any fields changed, and decides whether a new document needs to be sent to Elasticsearch.
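The "compare before sending" step of the original handler can be sketched as follows; the dict-backed `database` and `es_index` and all function names are hypothetical stand-ins for the MySQL and Elasticsearch clients.

```python
# Sketch of the original handler flow: build a new document from the
# database, fetch the original from Elasticsearch, and only send when
# some field actually changed. All names are illustrative.
def should_send(new_doc, original_doc):
    """True when any field differs between the rebuilt document and the
    one already stored in Elasticsearch."""
    return new_doc != original_doc

def handle_event(event_id, database, es_index):
    new_doc = database[event_id]       # load data from MySQL by ID
    original = es_index.get(event_id)  # fetch the original ES document
    if should_send(new_doc, original):
        es_index[event_id] = new_doc   # send the new document
        return True
    return False                       # skip the redundant update

database = {"a1": {"name": "Burger King", "open": True}}
es_index = {"a1": {"name": "Burger King", "open": True}}
sent = handle_event("a1", database, es_index)
print(sent)  # no field changed, so no update is sent
```

Notice that even a skipped update costs one database read and one Elasticsearch read; this cost motivates the optimisations later in the article.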
When handling an object B event, the handler also cascades an update to the related object A in the Elasticsearch index, following the common handler flow. We call this operation a "cascade update" (Cascade Update).

![Procedure executed by the event handler](https://static001.geekbang.org/infoq/56/564e4df311ee29a1bffe25b787d9982a.jpeg)

*Procedure executed by the event handler*

### Problems with the original infrastructure

Data in an Elasticsearch index can come from multiple MySQL tables, as shown below.

![Data in an Elasticsearch index](https://static001.geekbang.org/infoq/dc/dc0944e9b985df30ab1698312c2ff83e.jpeg)

*Data in an Elasticsearch index*
The original infrastructure had several problems:

- **Heavy database load**: consumers read from the Kafka streams, treat stream events as mere notifications, then load data from the database by ID to create new Elasticsearch documents. The data carried in the stream events is barely used. Loading from the database on every event and then creating a new Elasticsearch document produces heavy database traffic, and the database becomes a bottleneck.
- **Data loss**: producers send data copies to Kafka in application code. Data changes made through the MySQL command-line tool (CLT) or other database management tools are lost.
- **Tight coupling with the MySQL table schema**: if a producer adds a new column to an existing MySQL table and that column needs to be synced to Elasticsearch, the DSP cannot capture the column's changes until the producer makes a code change to add the column to the related Kafka stream.
- **Redundant Elasticsearch updates**: the Elasticsearch data is a subset of the MySQL data. Producers publish data to the Kafka streams even when fields irrelevant to Elasticsearch are modified, and these irrelevant stream events are still picked up.
- **Duplicate cascade updates**: consider a search index containing both object A and object B, with a burst of updates to object B in a short period. All of the updates are cascaded into the index containing both A and B, bringing heavy traffic to the database.

## Optimised incremental sync

### MySQL binlog

The MySQL binary log (binlog) is a set of log files recording information about data modifications made to a MySQL server instance. It contains all statements that update data. There are two types of binary logging:

- **Statement-based logging**: events contain the SQL statements that produced the data changes (inserts, updates, deletes).
- **Row-based logging**: events describe changes to individual rows.

The Grab Caspian team (Data Tech) built a Change Data Capture (CDC) system on top of MySQL row-based binlogs. It captures all data modifications for all MySQL tables.

### Current Kafka streams

The binlog stream event definition is a plain data structure with three main fields: Operation, PayloadBefore, and PayloadAfter. Operation is an enum of create, delete, and update. The payloads are data in JSON-string format. All binlog streams follow the same stream event definition. With the PayloadBefore and PayloadAfter carried in binlog events, optimising incremental sync on the DSP became possible.
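The event definition above can be sketched directly; the field names follow the article, while the class itself and the sample values are illustrative.

```python
# Sketch of the binlog stream event definition: an Operation enum plus
# row data before/after the change, both as JSON strings.
import json
from dataclasses import dataclass
from enum import Enum

class Operation(Enum):
    CREATE = "create"
    UPDATE = "update"
    DELETE = "delete"

@dataclass
class BinlogEvent:
    operation: Operation
    payload_before: str  # row data before the change, as a JSON string
    payload_after: str   # row data after the change, as a JSON string

event = BinlogEvent(
    operation=Operation.UPDATE,
    payload_before=json.dumps({"id": 1, "name": "Burger", "rating": 4.0}),
    payload_after=json.dumps({"id": 1, "name": "Burger", "rating": 4.5}),
)

# The before/after payloads make the changed fields directly computable,
# without reading the database.
before, after = json.loads(event.payload_before), json.loads(event.payload_after)
changed = {k for k in after if before.get(k) != after.get(k)}
print(changed)  # → {'rating'}
```

Carrying both payloads in every event is what allows the consumer to reason about a change without touching MySQL at all.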
![Main fields of a binlog stream event](https://static001.geekbang.org/infoq/f7/f78e6300e7c6e7aa65700158baf7acc4.jpeg)

*Main fields of a binlog stream event*

## Stream consumer optimisations

### Event handler optimisations

#### Optimisation 1

Recall the redundant-update problem mentioned above: the Elasticsearch data is a subset of the MySQL data. The first optimisation filters out irrelevant stream events by checking whether the fields that differ between PayloadBefore and PayloadAfter fall within the Elasticsearch data subset.

Since the payloads in binlog events are JSON strings, a data structure is defined to parse PayloadBefore and PayloadAfter containing only the fields present in the Elasticsearch data. By comparing the parsed payloads, it is easy to tell whether a change is relevant to Elasticsearch.

The figure below shows the optimised event-handler flow. As the blue flow shows, when an event is handled, PayloadBefore and PayloadAfter are compared first; the event is processed only if they differ. Because irrelevant events have already been filtered out, there is no need to fetch the original document from Elasticsearch.

![Event handler optimisation 1](https://static001.geekbang.org/infoq/68/683c13f3814f10cf672a6c344351f0eb.jpeg)

*Event handler optimisation 1*

#### Results

- No data loss: changes made with the MySQL CLT or other database management tools can be captured.
- No dependency on MySQL table definitions: all data is in JSON-string format.
- No redundant Elasticsearch updates or database reads.
- Elasticsearch read traffic reduced by 90%.
- Fetching the original document from Elasticsearch to compare with the newly created one is no longer needed.
- 55% of irrelevant stream events filtered out.
- Database load reduced by 55%.
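Optimisation 1 can be sketched as a relevance filter: parse each payload keeping only the Elasticsearch-visible fields, then drop the event when the two projections are identical. `ES_FIELDS` and the sample columns are illustrative, not Grab's actual schema.

```python
# Sketch of optimisation 1: project PayloadBefore/PayloadAfter onto the
# fields that exist in the Elasticsearch data, then filter out events
# whose ES-visible projections are identical.
import json

ES_FIELDS = {"name", "price"}  # subset of MySQL columns synced to ES

def project(payload_json):
    """Parse a payload into a struct with only ES-relevant fields."""
    row = json.loads(payload_json)
    return {k: v for k, v in row.items() if k in ES_FIELDS}

def is_relevant(payload_before, payload_after):
    """An event matters only if an ES-visible field changed."""
    return project(payload_before) != project(payload_after)

# A change to an ES-irrelevant column is filtered out...
irrelevant_changed = is_relevant(
    '{"name": "Burger", "price": 5, "internal_note": "old"}',
    '{"name": "Burger", "price": 5, "internal_note": "new"}')
# ...while a change to a synced column passes through.
relevant_changed = is_relevant(
    '{"name": "Burger", "price": 5}',
    '{"name": "Burger", "price": 6}')
print(irrelevant_changed, relevant_changed)  # → False True
```

Filtering happens entirely on data already inside the event, which is why this step removes both the Elasticsearch fetch and the database read for irrelevant events.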
![Elasticsearch event updates for optimisation 1](https://static001.geekbang.org/infoq/3c/3c07936e677314368d8ee637ce02d0d6.jpeg)

*Elasticsearch event updates for optimisation 1*

#### Optimisation 2

The PayloadAfter in an event carries the updated data. This made us question whether a brand-new Elasticsearch document, built by reading from multiple MySQL tables, was needed at all. The second optimisation switches to partial updates, using the data difference carried in binlog events.

The figure below shows the event-handler flow with partial updates. As the red flow shows, instead of creating a new Elasticsearch document for every event, the handler first checks whether the document exists. If it does (which is the case most of the time), the data changed in the event, i.e. the difference between PayloadBefore and PayloadAfter, is applied as an update to the existing Elasticsearch document.

![Event handler optimisation 2](https://static001.geekbang.org/infoq/76/76e06a1ba69db8eecaa5920c60c8f3e4.jpeg)

*Event handler optimisation 2*
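The partial-update path of optimisation 2 can be sketched as below; the dict-backed `es_index` stands in for an Elasticsearch partial-update call, and all names are illustrative.

```python
# Sketch of optimisation 2: compute the changed fields from the binlog
# payloads and apply them as a partial update to the existing document,
# instead of rebuilding the whole document from MySQL.
import json

def changed_fields(payload_before, payload_after):
    """Fields whose values differ between the two payloads."""
    before, after = json.loads(payload_before), json.loads(payload_after)
    return {k: v for k, v in after.items() if before.get(k) != v}

def partial_update(es_index, doc_id, payload_before, payload_after):
    doc = es_index.get(doc_id)
    if doc is None:
        # fall back to the old flow: build a full document from MySQL
        return False
    doc.update(changed_fields(payload_before, payload_after))
    return True

es_index = {"a1": {"name": "Burger", "price": 5, "rating": 4.0}}
updated = partial_update(
    es_index, "a1",
    '{"id": 1, "name": "Burger", "price": 5}',
    '{"id": 1, "name": "Burger", "price": 6}')
print(es_index["a1"])  # only "price" was touched
```

Only the changed fields travel to Elasticsearch, which is where the reduced Elasticsearch and database load in the results below comes from.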
#### Results

- Most Elasticsearch-relevant events became partial updates: the data in the stream event itself is used to update Elasticsearch.
- Reduced Elasticsearch load: only the modified fields are sent to Elasticsearch.
- Reduced database load: on top of optimisation 1, database load dropped a further 80%.

![](https://static001.geekbang.org/infoq/e1/e1936030d5bd99806ccb77fec83794b1.jpeg)

### Event buffer optimisation

When pushing a new event into the event buffer, we no longer replace the old event; instead, the new event is merged with the old one.

Each sub-buffer in the event buffer still has a maximum size of 1. With this optimisation, stream events are no longer treated as notifications; the payloads in the events are used to perform partial updates. The old replace-on-push behaviour therefore no longer works for binlog streams, since replacing an event would discard the data difference it carries.

When the event dispatcher pushes a new event into a non-empty sub-buffer, it merges the event A already in the sub-buffer with the new event B into a new binlog event C, whose PayloadBefore comes from event A and whose PayloadAfter comes from event B.

![Operation of the merge event buffer optimisation](https://static001.geekbang.org/infoq/5e/5ed4b00b88bcb2e5ca87befef3a06db0.jpeg)

*Operation of the merge event buffer optimisation*
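The merge rule can be sketched directly: event C spans both changes, taking PayloadBefore from the old event and PayloadAfter from the new one. The class and field names are illustrative.

```python
# Sketch of the event-buffer merge: when a new event B arrives for an ID
# whose sub-buffer already holds event A, merge them into event C
# (PayloadBefore from A, PayloadAfter from B) instead of replacing A.
def merge(event_a, event_b):
    return {
        "id": event_a["id"],
        "payload_before": event_a["payload_before"],  # from the old event
        "payload_after": event_b["payload_after"],    # from the new event
    }

class MergingBuffer:
    def __init__(self):
        self._sub = {}

    def push(self, event):
        held = self._sub.get(event["id"])
        self._sub[event["id"]] = merge(held, event) if held else event

    def pop(self, key):
        return self._sub.pop(key, None)

buf = MergingBuffer()
buf.push({"id": "b7", "payload_before": '{"price": 5}',
          "payload_after": '{"price": 6}'})
buf.push({"id": "b7", "payload_before": '{"price": 6}',
          "payload_after": '{"price": 7}'})
merged = buf.pop("b7")
print(merged)  # spans 5 -> 7, so no intermediate change is lost
```

The merged event still describes a single valid transition, so the partial-update handler from optimisation 2 can process it unchanged.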
### Cascade update optimisation

#### Optimisation

We use a new stream to handle cascade-update events. When the producer sends data to a Kafka stream, data sharing the same ID is stored on the same partition. Each DSP service instance has only one stream consumer, and when consumers consume a Kafka stream, each partition is consumed by exactly one consumer. Therefore, cascade-update events sharing the same ID are consumed by a single stream consumer on the same EC2 instance. With this mechanism, the in-memory event buffer can deduplicate most of the cascade-update events sharing the same ID.

The flow chart below shows the optimised event-handler procedure. Green shows the original flow, while purple shows the current flow with cascade-update events. When handling an object B event, instead of cascading the update to the related object A directly, the event handler sends a cascade-update event to the new stream. The consumer of this new stream handles the cascade-update event and synchronises object A's data into Elasticsearch.

![Event handler with cascade update](https://static001.geekbang.org/infoq/05/053a90ee06a9ccda5bfa3fa0cc5721fb.jpeg)

*Event handler with cascade update*

#### Results

- Cascade-update events eliminated 80% of the duplicate data.
- The database load introduced by cascade updates was reduced.

![Cascade update events](https://static001.geekbang.org/infoq/d2/d2895a86d50960d5f4d48e1226fbebf6.jpeg)

*Cascade update events*
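The ID-keyed routing that makes this deduplication possible can be sketched as hash partitioning: every event with the same ID lands on the same partition, hence on the same consumer. The partition count and routing function are illustrative, not Grab's actual configuration.

```python
# Sketch of ID-keyed partitioning for the cascade-update stream: events
# sharing an ID are routed to the same partition, so one consumer sees
# all of them and its in-memory buffer can deduplicate/merge them.
NUM_PARTITIONS = 4  # illustrative

def partition_for(event_id):
    """Same ID always hashes to the same partition."""
    return hash(event_id) % NUM_PARTITIONS

def route(events):
    partitions = {p: [] for p in range(NUM_PARTITIONS)}
    for event in events:
        partitions[partition_for(event["id"])].append(event)
    return partitions

events = [{"id": "a1", "seq": i} for i in range(3)]
partitions = route(events)
non_empty = [p for p, evs in partitions.items() if evs]
print(non_empty)  # all three a1 events land on a single partition
```

Because a partition is consumed by exactly one consumer, this routing guarantees that same-ID cascade updates meet in one event buffer, where the merge logic collapses them.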
## Summary

This article introduced four optimisations to the Data Synchronisation Platform. After switching to the MySQL binlog streams provided by the Coban team and optimising the stream consumers, the DSP saved about 91% of database reads and 90% of Elasticsearch reads, and the average queries per second (QPS) of stream traffic handled by the stream consumers rose from 200 to 800, peaking above 1,000 during busy hours. With the higher QPS, both processing time and the latency of synchronising data from MySQL to Elasticsearch dropped. After these optimisations, the DSP's data synchronisation capability improved significantly.

**Original article:**

[https://engineering.grab.com/search-indexing-optimisation](https://engineering.grab.com/search-indexing-optimisation)