微信搜索引擎中索引的分佈式演進

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"一、引言"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"提起分佈式,不少人能很清晰的闡述paxos、CAP等理論,但我們在遇到一個具體的分佈式問題時,很少有人能知道如何做出一個“好”的設計。對於當前的很多分佈式數據系統,包括開源的HBase、ElasticSearch等,我們一般只知其然,很少能夠知其所以然。因爲幾乎所有的分佈式數據系統,都會根據自身情況,對實際場景做一些假設,有所舍取,這種多樣性也增加了我們的理解難度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"筆者從業八年,先後從事過分佈式存儲系統、搜索系統的開發和設計。本文將通過搜一搜場景下的搜索引擎的分佈式演化,闡述分佈式數據系統在設計中的權衡,希望能給各位讀者帶來一點啓發和幫助。這裏假設讀者已瞭解常用的分佈式以及搜索的基本理論,具體細節不再冗述。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"二、背景"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":"br"}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"先來看一下維基對搜索引擎的定義:搜索引擎是一種信息檢索系統,旨在協助搜索存儲在計算機系統中的信息。大家最熟悉的商業搜索系統莫過於baidu、google,而ElasticSearch (ES)是迄今爲止最爲成功的開源搜索引擎。在搜索引擎中,通常會採用倒排索引,用以提升檢索性能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"相比商業系統,ES更注重易用性,採用了對等架構,每個數據節點既處理寫入請求,又處理檢索請求。所以ES更適用於對搜索性能並不敏感的業務,在最經典ELK中,ES就用於日誌搜索分析。在成熟的商業系統中,對檢索性能穩定性要求比較苛刻,數據寫入時需要儘可能少的影響搜索性能,所以更多情況下會將資源消耗比較大的建索引部分拆分到離線來做。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"筆者所在的微信搜一搜中,搜索引擎也分爲在線離線兩部分,離線用於創建索引,在線用於檢索"},{"type":"text","text":"。事實上,包括百度在內的大多數企業級搜索系統都採用了這類分離的架構。下圖爲項目初期的搜一搜索引管理架構:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/3d\/3d2ece61a5ba223fc13e433144baad7e.png","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如上圖所示,文檔在寫入Indexer後,由Indexer離線創建並管理索引。Searcher從Indexer拉取已建完索引,提供在線檢索服務,Searcher模塊中不同節點的索引數據完全一致,互爲鏡像。Indexer同步承擔了索引管理功能,爲無法擴容的單點。對於千萬級文檔中小業務來說,如果對數據流可靠性要求不苛刻,這裏尚能運行良好。但隨着文檔量越來越大,Indexer 和 searcher在性能、可擴展性和容錯等方面的問題凸顯,這種簡單架構已經無法滿足需求,亟需引入分佈式管理。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"三、數據分片"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":"br"}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"分佈式解決問題的核心方式是將大任務分解成小任務,分別運行在不同節點上,以加速任務處理。對數據來說也類似,我們可以對數據進行切分,切分後的數據稱爲分片,不同分片分散到各個節點各自處理。業界對分片的叫法五花八門,在ES和MongoDB中叫shard,在HBase中叫region,在Bigtable中叫tablet,另外還有vnode,vbucket等稱呼。本文將沿用ES的叫法,如無特別提示,shard即指分片。"},{"type":"text","marks":[{"type":"strong"}],"text":"這裏需要區分的一個概念是分片和副本,分片是對數據的切分,副本是對分片的拷貝"},{"type":"text","text":"。如果要求更高的讀取性能,通常需要增加副本數,如果數據量快速上漲,則可能需要更多的分片。另外一個非常容易與分片(shard)混淆的概念是分區(partition),二者有時甚至會直接混用。不過一般來說分片是橫向切分,多數按key劃分;而分區通常更像是一種縱向切分,比如按時間劃分。key相同的文檔先後進入系統,一定會屬於一個分片,但可能被劃分到多個分區中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在對數據進行分片時,一個關鍵點是不同分片間需要儘量避免數據傾斜。分片的拆分方式大致有兩種,一種是按key的字母序劃分,另一種是通過對key進行hash後取餘的方式來劃分。Hash的方式用的更多一些,只要hash算法足夠均勻,就可以避免數據傾斜問題,在MongoDB中用的MD5 和 Redis中用的CRC16都是分散性非常好的算法。通常情況下,分片都會作爲數據管理和遷移的最小單位,分片和副本要求能均勻並分散的劃分到不同的節點。在擴容或縮容節點時,數據需要在節點間的重新再分配,即再均衡過程。"},{"type":"text","marks":[{"type":"strong"}],"text":"再均衡過程中,多數系統都要求儘可能少的影響讀寫性能,再均衡後也需要分片在節點間儘量均勻和分散"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"應對再均衡需求,分佈式中常見做法有三類:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":"1","normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"固定分片個數"},{"type":"text","text":":分片數在系統初始時選定,數據量增加時,單個分片的數據量相應增加。新擴容節點時,遷移部分分片到新節點,縮容時,反向遷移。這種方式簡單,易操作,ES就採用了該方式。但其也有相應的缺點:對數據量漲幅或降幅比較大的系統,初始搭建時很難確定合適的分片數。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"動態分片數"},{"type":"text","text":":當分片中的數據的增長到一定值時,就會拆分分片;如果分片中數據量過少,則會進行分片合併。在Hbase中,單個region的大小默認是10G,過大則會觸發拆分。相比上述固定分片的方式,這種方式主要優點是分片數可以自動適配數據量,不再有初始選擇分片數的煩惱。但在系統初始導入數據時,會由於分區的多次拆分而嚴重影響讀寫性能。所以在Hbase和MongoDB中,均允許配置一組初始分區,來規避該問題。由於這種分片方式更復雜,部分系統還會提供人工干預的措施。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"按節點數分片"},{"type":"text","text":":上述兩種分片方式,均與節點無關,擴容時通過遷移分片來均衡。還有一種分片方式比上述兩種更廣爲人知——一致性哈希:每個節點對應固定數量的分片,如果需要擴容節點,則同時增加相應的分區數,通過數據在分區之間的遷移來達到均衡。實際使用中,一致性哈希常常需要會引入vnode,來避免數據傾斜。由於對遷移不如上述兩種方式友好,所以該方式在數據系統中的應用不廣泛。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"固定分片的方式相比是最爲常用的,但如上所述,在系統初始搭建時,需要選擇遠大於節點數的分片數,爲後續擴容預留空間"},{"type":"text","text":"。在ES中,每個分片都是一個依賴Lucene的獨立引擎,負責數據的存儲和檢索。這限制了其在初始搭建時的分片數選擇,因爲過多的分片數會使得請求量放大,從而導致性能的急劇下降。針對該問題,存儲系統Ceph有個很好的解決方案:其分片爲邏輯概念,每臺數據節點都可以承載多個邏輯分片,所以可以在初始階段就選擇較大的分片數。相比ES,Ceph可以這麼做的主要祕訣是:存儲系統的數據分片並不需要一個獨立的引擎做支撐。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/08\/0872a548e7e65ee2834220d4acda838e.png","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在微信搜一搜中,數據寫入與在線檢索分離,寫入更類似Ceph,可以按邏輯分區進行劃分。這就允許我們在系統初始就選定一個較大的分區數,解決分片數難以確定問題。上圖展示了分片與節點的映射關係,當文檔寫入時,通過hash取餘的方式,打散到各分片中。新擴容節點3時,分片5從原來所屬的節點2遷移至節點3,通過分片遷移使得其在節點中的分佈依舊均衡。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"四、分佈式系統設計中的考量"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":"br"}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在需要劃分分片的數據系統中,一般都需要選出一個Leader來管理各個分片,這就涉及到選主問題。在數據的讀寫過程中,需要查找相應的分片,所以要管理路由信息。當節點故障時,需要通過遷移分片來重新分配數據,這就要求Leader能實時監控節點狀態。主分片與副分片之間通常需要複製數據,這又涉及一致性等問題。下面將會詳細闡述微信搜一搜中應對上述問題所作出的選型和考量。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1. 選主問題"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於比較複雜的協調或者事務場景,分佈式系統中通常會選出一個Leader來進行管理,"},{"type":"text","marks":[{"type":"strong"}],"text":"這主要是因爲單機的處理,遠比分佈式處理要簡單"},{"type":"text","text":"。分佈式中必須需要考慮的可靠、可信、亂序、延遲等問題,在單機中幾乎不存在。比如大名鼎鼎的共識算法Paxos,通常用來解決選主問題,這如果放到單機,將是不值一提的任務。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Leader的選舉通常有兩類方式:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":"1","normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"依賴ZK或etcd等協調服務系統:這是最爲常見的方式,其缺點就是需要多維護一套ZK系統。但相比帶來的複雜度,多數情況下,這個維護成本通常更願意被接受。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"自行選主:在無共享架構(shared nothing)系統中,爲了易用和維護性,系統會自行在節點間利用多數派來選主。這種方式常見於開源系統,比如ES、MongoDB和Ceph等。另外,在部分網絡系統(InfiniBand)中,爲了在網絡分區後,仍然能在兩個分區分別提供服務,也會自行實現選主。顯然,這種方式更爲複雜,在不同系統中自行選主的實現方式各有差異,異常和容錯方面的考量點也不盡相同,各有取捨。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在微信有相對成熟的自研chubby,維護成本比較低,所以在搜一搜中,我們選擇了更爲簡單的方式1。依賴chubby選出Leader後,由Leader來管理分片到節點的映射,尤其是上述再均衡的過程中的分片重分配。分片映射關係通過chubby進行持久化,只有在擴縮容時纔會進行變更。如果Leader故障,follower通過chubby搶鎖重新選主,新Leader接管分片映射後提供服務。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2. 在線檢索"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在檢索時,用戶請求需發送給全部的分片,分別進行召回,召回的結果在合併後返回。爲了提升在線吞吐,每個分片需要增加多個副本,所有副本均提供檢索服務。在分片和副本的管理中,一個常見的做法是將不同主分片和副分片均勻且分散的分到不同節點,通過多機併發提升在線性能,在ES、Ceph、Redis等系統中,均採用該方式。但在商業場景下,用戶請求量變化波動會非常大,比如表情搜索在節假日的請求量往往會上漲好幾倍。在上述分片劃分方式下,這種請求量大幅波動的場景會導致一個問題:"},{"type":"text","marks":[{"type":"italic"}],"text":"當請求量突然上漲時,需要同比增加副分片數,但這時擴容節點後,如果還需要做到主副分片均勻且分散的分佈的話,就需要遷移相應分片到新節點,而遷移本身對資源消耗比較大,又會影響到在線性能"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"應對上述的請求大幅波動,微信內普遍採用了Svrkit框架。Svrkit框架是一種非常經典的微服務架構,系統按模塊來劃分,每個模塊都是一個服務。同一模塊會在多個節點部署進程,不同節點互爲鏡像。請求量上漲時,迅速擴容節點,通過部署更多鏡像來應對。如果節點異常導致請求失敗,上游通過換機重試來避免最終失敗,從而保證可用性。但對Searcher來說,索引量比較大時,單個鏡像中不能裝載全部索引,這就需要將索引拆分到不同節點。在Svrkit中提供了一種byset模式,允許同一模塊劃分多個分組(Set),各自加載一部分索引。每個分組都有各自的多個鏡像提供服務,上游在下發請求時,需要從所有分組進行召回,合併返回。如果遇到上述的請求量上漲時,每個分組各自擴容鏡像即可。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/da\/da2be4bf2c7798b4b894eacd498894eb.png","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如圖所示,在線檢索的Searcher模塊,採用了byset模式,劃分成多個分組。"},{"type":"text","marks":[{"type":"strong"}],"text":"上述分片到節點的映射,也相應的變成了分片到分組的映射,映射的管理由Leader來負責"},{"type":"text","text":"。當文檔量上漲時,通過擴容分組來容納;請求量上漲時,各組分別擴容,增加節點數來應對。得益於離線建索引的架構,新擴容的節點只需要從離線拉取數據,整個過程不影響現有服務。在擴縮分組時,部分分片要遷移到新分組中,這時需要注意的是隻有在新分組上線提供服務後,才能下線舊分組中的已遷移分片。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在ES中,主分片會均勻分散到各節點,這時Leader還需要同時管理請求路由。"},{"type":"text","marks":[{"type":"strong"}],"text":"而在byset中,路由按分組劃分,整個檢索過程中,Leader並不參與"},{"type":"text","text":",是什麼原因使得這裏可以做到如此簡潔呢?天下沒有免費的午餐,這裏的簡化也不例外。ES中,如果有Searcher的節點數據無法同步時,會通過Leader從路由中剔除該節點,所以不會造成數據缺失。但在Svrkit的byset路由中,Leader並未參與,如果有Searcher節點的數據異常,則無法通過路由的方式及時剔除異常節點。這類數據缺失的代價可謂不菲,能否有其他方式減少該問題的發生呢?如果異常節點與Leader之間的通信正常,Leader可以通知該異常Searcher拒絕服務,由上游重試到其他節點來保證正確召回。但如果網絡異常導致通信失敗,Searcher無法知道自己數據不完整時,這裏就會出現上述數據缺失問題了。所以這裏的簡化其實隱含的一個假設:如果Leader與某Searcher通信中斷,則客戶端也無法訪問該Searcher節點。在同一數據中心的局域網內,通過交換機堆疊等措施,可以做到全鏈路無網絡單點設備,減少這種網絡分區風險。這種場景下,該假設不成立的概率其實非常低,遠小於人工操作失誤和軟件bug帶來的問題。其實Svrkit框架下的byset路由模塊都隱含了該假設,最常用的KV系統就依賴byset路由,其穩定性已經過了實踐檢驗,所以當前場景下做出該假設是可行的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3. 文檔寫入"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"文檔寫入後,首先需要存儲,就涉及用共享存儲(shared disk)架構,還是無共享架構(shared nothing)的問題。這個決定不難做出,在微信中已經有自研的WFS(類似HDFS)、WBT(類似Hbase)和WQ(類似Kafka)已被廣泛使用。顯然,用共享存儲能極大簡化工作,實際上在商業搜索中,幾乎都依賴了其他存儲組件。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/5a\/5ad638345c8b9ab711f078c067a5e776.png","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"由於分片數固定,哈希方式已約定,所以文檔在寫入時,可以提前計算出其所在的分片,按分片寫入依賴WBT和WQ的數據平臺"},{"type":"text","text":"。在建索引時,Processor模塊從數據平臺掃描文檔,在預處理完成後返回給Indexer,Indexer負責索引建立,並落地到WFS。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"4. 節點管理"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在線Searcher模塊中不同的分組,需要加載不同分片的數據及控制上線順序;Indexer的不同的節點,需分別負責不同分片的索引建立;在實時流中,Processor會提前按分組聚合分片,所以也需要感知分片到分組的映射。基於以上原因,"},{"type":"text","marks":[{"type":"strong"}],"text":"Leader需要感知各個模塊中節點的詳細狀態,在擴縮容或節點故障時,及時作出調整"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"常用的節點發現方式是依賴ZK,通過目錄監聽來實現,這也是ZK作爲服務協調者主要用法之一。如果在搜索引擎中採用ZK的方案,在監控和與其他模塊交互等方面的工作要多很多,所以並不可取。微信的SvrKit框架中,會在所有節點部署相同的路由配置文件來實現模塊路由,路由變更由運維人員操作,需全局更新配置文件。這裏,Leader可以從路由配置中查找到所有正在提供服務的工作節點信息,如果能依賴路由配置,Leader發現節點的過程就變的很簡單了,新節點加入時通過路由文件就可以找到對應的Leader。但單純依賴路由配置還有兩個問題:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":"1","normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"工作節點當前的狀態無法被及時感知,比如節點正在啓動,磁盤故障等。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"在擴縮容時,新擴Searcher節點只有正常提供服務後,配置才能被重新下發給Leader,但新節點在提供服務前就需要知道分片信息,以便進行數據同步。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Leader如果需要感知工作節點的當前狀態,一個常見的做法就是通過心跳。工作節點定期通過心跳給Leader上報自身的情況,Leader將工作節點所需的分片映射、索引任務等信息帶回給工作節點。如果結合路由配置和心跳,這裏是否能解決上面的問題呢?針對問題1,心跳可以攜帶節點信息,包括啓動、異常等狀態供Leader決策。針對問題2,即使節點不在路由中,Leader也可以在心跳中將加載索引任務帶回給Searcher節點,新節點完成數據加載後,提供在線服務。所以,"},{"type":"text","marks":[{"type":"strong"}],"text":"這裏結合路由配置和心跳的方式是可行的"},{"type":"text","text":"。不過心跳也有失效的可能,利用心跳來檢測節點狀態本身並不完全可靠。比如在工作節點的心跳處理線程有死鎖、掛死、CPU繁忙等異常時,可能會有誤檢;在異常網絡時,比如大包比小包更易丟失的場景下,會導致漏檢,利用心跳的方式來收集信息,也就意味着需要能容忍上述各類異常。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/2e\/2e4aa022e816558c7ce5252603eba894.webp","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上圖展示了Leader利用路由和心跳來收集Searcher和Indexer中各進程狀態的過程。通過心跳,Leader能感知各進程當前狀態,並利用路由配置來判斷是否爲新擴容節點等信息。Leader在心跳包的回執中,同步給Indexer下發創建索引任務,給Searcher下發相應的加載索引任務。感知節點狀態還允許Leader及時處理節點故障,比如在Indexer故障時,Leader會通過心跳超時檢測到,這時需回收給其分配的索引任務,換Indexer重做。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5. 事務、一致性和數據複製"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"事務是數據庫中的概念,通常稱作符合ACID要求。由於ACID過於苛刻,在單機場景下利用鎖等方式尚可實現,但在分佈式場景下就非常難了。目前各數據庫的分佈式實現都是弱化後的ACDI。搜索系統中的數據流,一般都不涉及事務,但各類操控類的操作,比如擴容、縮容、回滾等都有一定的事務要求。不過控制類的操作,幾乎都是非常低頻的操作,其本身不涉及性能問題,所以經常在Leader或Master中以單機的方式執行。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"存儲界的一位架構師大牛曾經總結過一條非常實用的經驗:"},{"type":"text","marks":[{"type":"strong"}],"text":"控制流一定要跟數據流分離"},{"type":"text","text":"。這裏的主要原因是二者的需求不同:控制操作通常由運維人員發起,非常低頻,允許失敗後重試,但對事務性有一定的要求;而數據流往往對性能或可靠性的要求更高,但相應會在其他方面做一些折讓,通常是在一致性及可用性上有條件的降低要求。在部分要求強一致性的系統中,會在節點故障時臨時犧牲可用性,Leader變更路由後才恢復。"},{"type":"text","marks":[{"type":"strong"}],"text":"將複雜控制邏輯剝離的做法通常使得數據流更可靠,比如Chubby或Leader故障導致短期無Leader的情況下,並不影響數據流的正常執行"},{"type":"text","text":"。ClickHouse是控制流分離的一個反例,其寫操作需要經過ZK傳遞,大大限制寫性能。不過作爲OLAP中的佼佼者,其更關注在線查詢性能,而對寫操作有更高的容忍度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在分佈式中,另一個經常被提及的問題是數據複製。在單數據中心,業界普遍採用的是單主節點複製,比如ES、Ceph、Redis等都是該方式。在主分片和副分片的數據同步時,多數系統採用了同步複製的方式來保證一致性。不同業務在一致性方面的需求不同,這就衍生出很多讓人眼花繚亂的名詞:最終一致性、因果一致性、讀寫一致性、會話一致性、單調一致性等等。這種折讓雖然爲業務帶來靈活性,但也加劇了分佈式系統的難度。在一致性上折讓最大的系統莫過於Redis集羣了,其爲了性能直接採用了異步複製,相當於放棄了一致性保證,這是使用者所詬病的一個點。與單主複製對應的是多主複製,主要用於超大型、跨數據中心時的複製,通常採用異步的方式。多主複製只在幾個有超大型數據的商業帝國纔會用到,多數業務並不涉及,這裏暫不討論。最後一種複製方式是無主複製,該方式用的相對較少,最經典的是DynamoDB。無主複製中,多由客戶端對所有數據節點發起讀寫請求,根據Quorum多數派,來決定最新的值。這種方式在節點異常時,其實很難判斷數據順序,而且讀放大比較嚴重,所以並不流行。在搜一搜中,Searcher模塊同一分組內並無主節點,不同節點之間不會進行數據同步,而是從WFS中拉取。這種做法更接近無主複製,其索引上線(相當於寫入)由Leader控制,爲較低頻操作。"},{"type":"text","marks":[{"type":"strong"}],"text":"搜索業務通常對一致性的要求都非常寬鬆,一般只要求儘可能達到單調讀的一致性,這裏通過將同一用戶的請求路由到同一節點上來實現"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"6. 搜索引擎系統架構"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過對上述問題的權衡,搜一搜的分佈式架構演變爲如下模樣:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/df\/df6b640eaa35a10e900f5d0b0f01c9cd.png","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Leader依賴Chubby選舉,爲整個搜索引擎的大腦,負責管理分片映射、節點狀態及路由。Searcher模塊提供了在線的召回服務,用戶在發起搜索時,通過broker將請求下發至Searcher的全部分組,對結果Merge後返回。整個搜索過程Leader並不參與,實現控制流和搜索數據流的分離。Leader通過心跳與Searcher中各節點進行交互,收集各個節點狀態,通知各節點加載相應索引數據,並利用路由配置識別非集羣節點和正在擴容中的節點。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"文檔數據寫入時,先通過hash取餘的方式確定所屬分片,按分片寫入數據平臺中的WBT(類似HBase)和WQ(類似Kafka)。這裏所選的分片數,一般遠大於Searcher的分組數,確保在擴容分組時依舊能均勻分佈。索引的創建、上線和退場的管理由Leader負責,Indexer依據Leader的指示,從Processor拉取文檔,創建索引,落地到wfs。由於搜索業務對一致性的要求比較寬鬆,Searcher中同分組的不同節點之間,並不進行索引同步,各節點各自從WFS拉取對應分組的索引進行加載。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"五、索引管理"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":"br"}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"在大數據處理中,常見的架構有兩種:Lambda和Kappa"},{"type":"text","text":";在Lambda架構中,數據處理分爲兩部分:批處理和流式處理。而在Kappa架構中,只有流式處理,避免了在實時數據處理系統上再“粘”一個離線數據處理系統。這兩種架構其實各有優缺點,Lamda架構更穩定,但需要維護兩套系統,批處理和實時處理要保證一致比較困難。Kappa架構更易維護,但其數據邊界不明確,需要複雜的異常處理,有數據丟失風險。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在搜一搜場景中,我們對文檔的可靠性要求比較苛刻,尤其是賬號系統(公衆號等),數據丟失很容易引發相應產商的投訴。另外,部分特徵需要批量計算產出,這就有定期批量更新的需求,"},{"type":"text","marks":[{"type":"strong"}],"text":"所以這裏自然選用了Lamda架構"},{"type":"text","text":"。當新數據進來時,經由實時流進入搜索系統;當特徵定期更新時,則需等待批量索引重建才能更新到線上。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/5f\/5f37ec2f0ce4a130e7565d16b78a6bcb.png","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上圖爲剔除處理邏輯後的數據流示意圖,文檔通過WQ(類似Kafka)接入後,分別進入用於批量處理的WBT(類似HBase)和用於實時流的WQ。批量計算出的特徵,直接寫入WBT,通過定期全量重建索引的方式上線;新增、刪除或更新的文檔,流經實時流WQ,直接進入搜索系統。由於文檔異步接入且索引在離線建立,所以準確的講這裏應該叫近實時流。在ES中,作爲存儲系統,讀寫操作是實時的,但其提供的搜索服務也需要提前建索引,也屬於近實時的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1. 全量索引更新"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"全量索引重建爲定期任務,indexer從WBT掃描全部文檔重建索引,通過WFS推送至Searcher。由於Searcher提前劃分了分組,所以Indexer也需要按分組建索引,每次掃描時,只掃描對應分組的分片即可。對Searcher中的每個節點來說,每次召回相當於在索引中查找TopK的過程,如果每個節點只有一個索引,其檢索資源利用率是最高的,實際上多數商業搜索中也是這麼做的。但是,這也帶來一個問題:在索引更新時需要預留一倍的資源進行熱替換。爲了避免這種資源浪費,一種常用的方式是在對節點進行索引更新時,先停止服務,索引更新完成後重新上線該節點。如果業務數據足夠大,近實時流和全量索引屬於不同的Searcher模塊,再加上仔細選擇上線時機的話,停服對在線的影響其實可控,是較好的選擇。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在微信搜一搜場景中,引擎需要支持幾十上百業務,尤其是對文檔數較少的賬號系統來說,同時維護兩個Searcher模塊的運維成本比較高,所以依舊選擇了不停服的方案。但不停服的時候,如何避免索引替換時新舊兩份數據帶來的資源佔用呢?針對該問題,一個很自然的解決方案是"},{"type":"text","marks":[{"type":"strong"}],"text":"對節點內的索引數據進行切分,即Searcher節點內的索引切分爲多個庫,每個庫依次替換,這樣只需多預留一個庫的資源即可"},{"type":"text","text":"。爲了與實時流區分,這裏姑且稱作全量庫。這裏的一個難點是全量庫替換時,要求新庫能覆蓋舊庫的全部數據,以保證數據完整性。如果新舊庫包含相同的分片,則可解決該問題,所以"},{"type":"text","marks":[{"type":"strong"}],"text":"分片到分組的映射,又演化爲分片到全量庫的映射"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/1a\/1ac904cdf0d5e6cf561bc418b086b3d1.png","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如上圖,分片會映射到不同全量庫中,新擴容分組時,全量庫的個數也相應增加。全量索引重建的請求由運維人員或定時器發起,作爲控制操作發送給Leader。Leader負責生成管理全量庫的建庫、加載、退場等任務,Indexer收到建庫任務後,拉取對應的分片數據,建庫完成後在WFS保存。Leader收到Indexer的建庫任務完成後,通知Searcher中對應分組的節點進行庫數據加載及下線對應的舊庫。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"索引的每次全量重建完都會形成一輪完整的索引,這類似於存儲系統中的快照"},{"type":"text","text":"。不過這裏並不“快”,建庫過程中的拉取數據並不是一個瞬時操作,所以在判斷其覆蓋的近實時流範圍時,只能按起始拉取時間來判斷。已完成的索引數據,會在WFS中保存多個輪次,這爲索引回滾提供了條件。如果當前輪次的數據異常,Leader支持運維人員選擇一輪已上過線的索引,進行快速回滾,來消除錯誤數據帶來的影響。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2. 近實時流更新"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近實時流的實現,通常要求對寫友好,所以這裏需要從大名鼎鼎的LSM(Log-Structured Merge-Tree)說起。猶如其名,"},{"type":"text","marks":[{"type":"strong"}],"text":"LSM最初確實是用於日誌文件系統的,其主要思想是:增量數據在內存中先排序,超過閾值時落地文件,文件是不可修改的,新的增量重新生成新文件,這就將數據隨機寫入變成了順序寫"},{"type":"text","text":"。但這同時也導致數據在多次更新時,會在不同文件中有一定的冗餘,這種冗餘在隨後的文件逐級合併時清除。LevelDB是最爲經典的LSM範例,其提供了按Key查詢的能力,鑑於其簡潔和優雅的代碼設計,已經成爲LSM學習標杆。在搜索引擎中,Lucene也符合LSM思想,與LevelDB不同的是,其在內存中的索引更復雜,並不是簡單按key排序,而是按倒排建立索引。另一個不同點是文件合併時的策略,LevelDB是按Level由小到大合併,而Lucene中是按文件大小合併。按文件大小合併策略相比更爲靈活、高效,採用該策略的另一個經典系統是HBase。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LSM其實是通過犧牲部分讀性能,換取最大化的寫。這種方式也有相應的缺點:出於資源的限制,往往無法將數據合併到1個文件中,這也使得部分冗餘數據無法被消除。另外,在文件合併時,需要大量的IO和CPU資源,這會搶佔在線讀寫資源,帶來一定的性能波動。不過,以上問題在離線創建索引的搜索系統中並不存在:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":"1","normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"索引在離線創建,在建索引時並不太關注資源搶佔問題;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"由於有全量索引更新流程,這相當於數據重整過程。過舊的近實時流的文件會被覆蓋而下線,所以並不需要擔心數據冗餘問題。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然而,這裏還有一個問題沒有解決:LSM主要是爲單節點準備的,但Indexer爲無狀態模塊,不同的合併任務可能屬於不同節點,這裏還能適用麼?其實Indexer建完索引後,會在WFS中持久化,這裏只是將本地的IO變換成WFS的IO操作。"},{"type":"text","marks":[{"type":"strong"}],"text":"由於沒有讀操作,多節點分佈並無不妥,建庫任務由Leader統一管理,也免除了多機之間同步的煩惱"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/0d\/0d12d5ee152ad358f7cd1bcf7c2511af.webp","alt":"Image","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上圖爲某分組中近實時流庫的快照示意圖,其中下面的Refresh庫相當於LSM內存中累積的數據,Level庫類似LSM中落地後的文件。新增數據,首先會進入Refresh庫,只有Refresh庫的數據到達一定閾值,纔會轉換成Level_0的庫。如果數據寫入速度較低,Refresh庫在時間閾值(5秒)到期後也會落地上線,以便新數據能被及時檢索到;上圖中“庫91” 爲已上線(綠色表示)的Refresh庫,新的數據會進入“庫92”,”庫92”可以完全覆蓋“庫91”的數據,如果“庫92”中的數據達到閾值後,會轉換爲Level_0的“庫9”。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在Level庫中,由低向高合併,高Level的庫一旦上線(綠色表示),則會同步下線掉其已覆蓋的低Level庫(灰色表示)。如果忽略刪除操作帶來的波動,這裏每個Level中,不同的庫中文檔數幾乎一致,其大小也接近,所以不存在按大小還是按Level合併策略的選擇。整個近實時流是按照時間順序排列的,當全量索引重建完成並上線後,會同步下線其覆蓋的近實時流庫(紅色表示)。圖中黃色部分,表示正在建索引中的庫,比如近實時流“庫7|8”,正在對“庫7”和“庫8”進行合併重建。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通常情況下,LSM的合併都在每個分片中各自進行,比如Lucene就屬於ES的一個分片。但我們的場景下,分片數往往設置一個比較大的值,按分片管理將會給在線帶來非常多的庫,同時也給Leader帶來較大的壓力。"},{"type":"text","marks":[{"type":"strong"}],"text":"由於在線Searcher按分組來加載索引,這就爲分片聚合提供了可能"},{"type":"text","text":"。這裏採用了按分組管理的方式,即Indexer會拉取歸屬於某個分組的全部分片的增量數據來創建索引。索引完成後,由Leader通知對應分組的Searcher進行加載,完成上線。不過這裏也相應有一個缺點是,近實時流只能按分組被全量索引覆蓋下線時,不能按分片來進行,造成少量的數據冗餘。在系統開啓近實時流後,Leader會自動生成相應的任務,下發給Indexer,數據流並不經過Leader,整個過程也無需人工參與。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"由於lambda架構有效的平衡了數據可靠性和時效性,爲多數商業系統所採用。但在搜一搜中,引擎需要支持幾十上百業務,這也放大了lamda架構的問題:每個業務都需要維護一個全量索引和近實時流兩套系統,維護成本比較高。即便有Leader來做任務管理,文檔預處理、模塊維護等仍需要各業務各自參與開發。"},{"type":"text","marks":[{"type":"strong"}],"text":"再加上Svrkit本身微服務的特性,更適用於RPC模式的流式處理,所以這裏在實現時更偏向於Kappa架構"},{"type":"text","text":"。默認情況下,用於預處理的Processor 和 負責索引建立的Indexer模塊並未區分全量和近實時流。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"在超大型搜索業務中,上述混合架構往往無法支撐,全量索引處理需要從流式處理中真正拆分,獨自進行批處理"},{"type":"text","text":"。在百億到千億文檔的大型Web搜索系統中,往往還需要進行冷熱數據分離。包括時新數據在內的熱數據,要求每次都能正常檢索,但冷數據由於排序靠後而得不到曝光,對響應時長和召回率的容忍度都要更高。與上述按文檔分片後的DAAT(document at a time)檢索模式不同,冷數據通常會採用成本更低的TAAT(term at a time)模式。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外,冷熱分離後,數據的冷熱遷移也是一個需要關注的點,往往根據業務需求來訂製。這類超大業務目前只在幾個商業巨頭中用到,已經超出本文的範圍和筆者的經驗,如有讀者對這部分感興趣,可以一起交流。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"六、結語"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":"br"}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文詳細闡述了微信搜一搜中索引管理的分佈式設計中的選型和取捨。其中涉及的多個分佈式經典問題,都是在數據系統的設計中要仔細權衡的。許多非常好的知名開源系統都可以給我們提供很多思路和經驗。另外,本文還闡述了在離線建索引架構下,索引管理過程中的選型和設計,這部分對採用讀寫分離架構的數據系統有較多的參考意義。由於選題比較大,限於筆者能力,錯誤在所難免,還望各位讀者不吝指出。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"horizontalrule"},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"頭圖:Unsplash"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"作者:白乾"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"原文:https:\/\/mp.weixin.qq.com\/s\/npfvrchu411KDsI6hQn-nw"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"原文:微信搜索引擎中索引的分佈式演進"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"來源:雲加社區 - 微信公衆號 [ID:QcloudCommunity]"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"轉載:著作權歸作者所有。商業轉載請聯繫作者獲得授權,非商業轉載請註明出處。"}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章