Undaunted by Ever-Rising Traffic: How BIGO Built a Real-Time Messaging System with Flink and Pulsar

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"BIGO 於 2014 年成立,是一家高速發展的科技公司。基於強大的音視頻處理技術、全球音視頻實時傳輸技術、人工智能技術、CDN 技術,BIGO 推出了一系列音視頻類社交及內容產品,包括 "},{"type":"link","attrs":{"href":"https:\/\/www.bigo.tv\/","title":null,"type":null},"content":[{"type":"text","text":"Bigo Live(直播)"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"和 "},{"type":"link","attrs":{"href":"https:\/\/likee.video\/en\/?lang=en","title":null,"type":null},"content":[{"type":"text","text":"Likee(短視頻)"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"等,在全球已擁有近 1 億用戶,產品及服務已覆蓋超過 150 個國家和地區。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"挑戰"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"最初,BIGO 的消息流平臺主要採用開源 Kafka 作爲數據支撐。隨着數據規模日益增長,產品不斷迭代,BIGO 消息流平臺承載的數據規模出現了成倍增長,下游的在線模型訓練、在線推薦、實時數據分析、實時數倉等業務對消息流平臺的實時性和穩定性提出了更高的要求。開源的 Kafka 集羣難以支撐海量數據處理場景,我們需要投入更多的人力去維護多個 Kafka 集羣,這樣成本會越來越高,主要體現在以下幾個方面:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"數據存儲和消息隊列服務綁定,集羣擴縮容\/分區均衡需要大量拷貝數據,造成集羣性能下降。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"當分區副本不處於 ISR(同步)狀態時,一旦有 broker 發生故障,可能會造成數據丟失或該分區無法提供讀寫服務。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"當 Kafka broker 磁盤故障\/空間佔用率過高時,需要進行人工干預。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"集羣跨區域同步使用 KMM(Kafka Mirror Maker),性能和穩定性難以達到預期。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在 catch-up 讀場景下,容易出現 PageCache 
污染,造成讀寫性能下降。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Kafka broker 上存儲的 topic 分區數量有限,分區數越多,磁盤讀寫順序性越差,讀寫性能越低。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Kafka 集羣規模增長導致運維成本急劇增長,需要投入大量的人力進行日常運維;在 BIGO,擴容一臺機器到 Kafka 集羣並進行分區均衡,需要 0.5 人\/天;縮容一臺機器需要 1 人\/天。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"如果繼續使用 Kafka,成本會不斷上升:擴縮容機器、增加運維人力。同時,隨着業務規模增長,我們對消息系統有了更高的要求:系統要更穩定可靠、便於水平擴展、延遲低。爲了提高消息隊列的實時性、穩定性和可靠性,降低運維成本,我們開始考慮是否要基於開源 Kafka 做本地化二次開發,或者看看社區中有沒有更好的解決方案,來解決我們在維護 Kafka 集羣時遇到的問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"爲什麼選擇 Pulsar"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"2019 年 11 月,我們開始調研消息隊列,對比當前主流消息流平臺的優缺點,並跟我們的需求對接。在調研過程中,我們發現 Apache Pulsar 是下一代雲原生分佈式消息流平臺,集消息、存儲、輕量化函數式計算爲一體。Pulsar 能夠無縫擴容、延遲低、吞吐高,支持多租戶和跨地域複製。最重要的是,Pulsar 存儲、計算分離的架構能夠完美解決 Kafka 擴縮容的問題。Pulsar producer 把消息發送給 broker,broker 通過 bookie client 寫到第二層的存儲 BookKeeper 上。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/d4\/d4bcd849de74fc987fe3d9fc44345ca3.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Pulsar 採用存儲、計算分離的分層架構設計,支持多租戶、持久化存儲、多機房跨區域數據複製,具有強一致性、高吞吐以及低延時的高可擴展流數據存儲特性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 
"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"水平擴容:能夠無縫擴容到成百上千個節點。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"高吞吐:已經在 Yahoo! 的生產環境中經受了考驗,支持每秒數百萬條消息的發佈-訂閱(Pub-Sub)。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"低延遲:在大規模的消息量下依然能夠保持低延遲(小於 5 ms)。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"持久化機制:Pulsar 的持久化機制構建在 Apache BookKeeper 上,實現了讀寫分離。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"讀寫分離:BookKeeper 的讀寫分離 IO 模型極大發揮了磁盤順序寫性能,對機械硬盤相對比較友好,單臺 bookie 節點支撐的 topic 數不受限制。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"爲了進一步加深對 Apache Pulsar 的理解,衡量 Pulsar 能否真正滿足我們生產環境大規模消息 Pub-Sub 的需求,我們從 2019 年 12 月開始進行了一系列壓測工作。由於我們使用的是機械硬盤,沒有 SSD,在壓測過程中遇到了一些性能問題,在 StreamNative 的協助下,我們分別對 "},{"type":"link","attrs":{"href":"https:\/\/mp.weixin.qq.com\/s\/mJViU-elhBwHMDiius2b8g","title":null,"type":null},"content":[{"type":"text","text":"Broker"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"和 "},{"type":"link","attrs":{"href":"https:\/\/mp.weixin.qq.com\/s\/f0vL6gdFJIjNwsfZ3BXePA","title":null,"type":null},"content":[{"type":"text","text":"BookKeeper"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 進行了一系列的"},{"type":"link","attrs":{"href":"https:\/\/mp.weixin.qq.com\/s\/mJViU-elhBwHMDiius2b8g","title":null,"type":null},"content":[{"type":"text","text":"性能調優"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",Pulsar 的吞吐和穩定性均有所提高。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"經過 3~4 個月的壓測和調優,我們認爲 Pulsar 完全能夠解決我們使用 Kafka 時遇到的各種問題,並於 2020 年 4 月在測試環境上線 
Pulsar。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Apache Pulsar at BIGO:Pub-Sub 消費模式"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"2020 年 5 月,我們正式在生產環境中使用 Pulsar 集羣。Pulsar 在 BIGO 的場景主要是 Pub-Sub 的經典生產消費模式,前端有 Baina 服務(用 C++ 實現的數據接收服務),Kafka 的 Mirror Maker 和 Flink,以及其他語言如 Java、Python、C++ 等客戶端的 producer 向 topic 寫入數據。後端由 Flink 和 Flink SQL,以及其他語言的客戶端的 consumer 消費數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/2b\/2b9fda40d9423d802cdf8c738b2cc9b1.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在下游,我們對接的業務場景有實時數倉、實時 ETL(Extract-Transform-Load,將數據從來源端經過抽取(extract)、轉換(transform)、加載(load)至目的端的過程)、實時數據分析和實時推薦。大部分業務場景使用 Flink 消費 Pulsar topic 中的數據,並進行業務邏輯處理;其他業務場景消費使用的客戶端語言主要分佈在 C++、Go、Python 等。數據經過各自業務邏輯處理後,最終會寫入 Hive、Pulsar topic 以及 ClickHouse、HDFS、Redis 等第三方存儲服務。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/3e\/3e34ae7c79b6ebfbe7153ab95a920e1e.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Pulsar + Flink 實時流平臺"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在 BIGO,我們藉助 Flink 和 Pulsar 打造了實時流平臺。在介紹這個平臺之前,我們先了解下 Pulsar Flink Connecto"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#3c4043","name":"user"}}],"text":"r 
"},{"type":"text","text":"的內部運行機理。在 Pulsar Flink Source\/Sink API 中,上游有一個 Pulsar topic,中間是 Flink job,下游有一個 Pulsar topic。我們怎麼消費這個 topic,又怎樣處理數據並寫入 Pulsar topic 呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/cc\/cc35ec53fc33c36f09534db123228353.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"按照上圖左側代碼示例,初始化一個 StreamExecutionEnvironment,進行相關配置,比如修改 property、topic 值。然後創建一個 FlinkPulsarSource 對象,這個 Source 裏面填上 serviceUrl(brokerlist)、adminUrl(admin 地址)以及 topic 數據的序列化方式,最終會把 property 傳進去,這樣就能夠讀取 Pulsar topic 中的數據。Sink 的使用方法非常簡單,首先創建一個 FlinkPulsarSink,Sink 裏面指定 target topic,再指定 TopicKeyExtractor 作爲 key,並調用 addsink,把數據寫入 Sink。這個生產消費模型很簡單,和 Kafka 很像。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Pulsar topic 和 Flink 的消費如何聯動呢?如下圖所示,新建 FlinkPulsarSource 時,會爲 topic 的每一個分區新創建一個 reader 對象。要注意的是 Pulsar Flink Connector 底層使用 reader API 消費,會先創建一個 reader,這個 reader 使用 Pulsar Non-Durable Cursor。Reader 消費的特點是讀取一條數據後馬上提交(commit),所以在監控上可能會看到 reader 對應的 subscription 沒有 backlog 信息。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在 Pulsar 2.4.2 版本中,由 Non-Durable Cursor 訂閱的 topic,在接收到 producer 寫入的數據時,不會將數據保存在 broker 的 cache 中,導致大量數據讀取請求落到 BookKeeper 中,降低數據讀取效率。BIGO 在 Pulsar 2.5.1 版本中修正了這個問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/51\/513ce0eb24e8f5053e06a9bf8d1e8d9e.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Reader 訂閱 Pulsar topic 後,消費 Pulsar topic 中的數據,Flink 如何保證 exactly-once 呢?Pulsar Flink Connector 使用另外一個獨立的 subscription,這個 subscription 使用的是 Durable Cursor。當 Flink 觸發 checkpoint,Pulsar Flink Connector 會把 reader 的狀態(包括每個 Pulsar Topic Partition 的消費位置) checkpoint 到文件、內存或 RocksDB 中,當 checkpoint 完成後,會發布一次 Notify Checkpoint Complete 通知。Pulsar Flink Connector 收到 checkpoint 完成通知後,把當前所有 reader 的消費 Offset,即 message id 以獨立的 SubscriptionName 提交給 Pulsar broker,此時纔會把消費 Offset 信息真正記錄下來。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Offset Commit 完成後,Pulsar broker 會將 Offset 信息(在 Pulsar 中以 Cursor 表示)存儲到底層的分佈式存儲系統 BookKeeper 中,這樣做的好處是當 Flink 任務重啓後,會有兩層恢復保障。第一種情況是從 checkpoint 恢復:可以直接從 checkpoint 裏獲得上一次消費的 message id,通過這個 message id 獲取數據,這個數據流就能繼續消費。如果沒有從 checkpoint 恢復,Flink 任務重啓後,會根據 SubscriptionName 從 Pulsar 中獲取上一次 Commit 對應的 Offset 位置開始消費。這樣就能有效防止 checkpoint 損壞導致整個 Flink 任務無法成功啓動的問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Checkpoint 流程如下圖所示。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/2a\/2ac879d8b6139959175bda5338a942f4.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"先做 checkpoint N,完成後發佈一次 notify Checkpoint Complete,等待一定時間間隔後,接下來做 checkpoint N+1,完成後也會進行一次 notify Checkpoint Complete 操作,此時把 Durable Cursor 進行一次 Commit,最終 Commit 到 Pulsar topic 的服務端上,這樣能確保 checkpoint 的 exactly-once,也能根據自己設定的 subscription 保證 message “keep alive”。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Topic\/Partition Discovery 要解決什麼問題呢?當 Flink 任務消費 topic 時,如果 Topic 增加分區,Flink 任務需要能夠自動發現分區。Pulsar Flink Connector 如何實現這一點呢?訂閱 topic 分區的 reader 之間相互獨立,每個 task manager 包含多個 reader thread,根據哈希函數把單個 task manager 中包含的 topic 分區映射過來,topic 中新增分區時,新加入的分區會映射到某個 
To lower the barrier to consuming Pulsar topics with Flink and let the Pulsar Flink Connector support more of Flink's newer features, the BIGO message queue team added Pulsar Flink SQL DDL (Data Definition Language) support and Flink 1.11 support to the Pulsar Flink Connector. Previously, the official Pulsar Flink SQL supported only Catalog, and consuming and processing data in Pulsar topics through DDL was inconvenient. In BIGO's scenarios, most topic data is stored in JSON format, and the JSON schema is not registered in advance, so a topic can only be consumed after its DDL is declared in Flink SQL. For this scenario, BIGO did secondary development on the Pulsar Flink Connector and provided a code framework for consuming, parsing, and processing Pulsar topic data through Pulsar Flink SQL DDL (shown below).

![](https://static001.geekbang.org/infoq/08/086d5424379a0cea851b680427913205.png)

In the code on the left, the first step configures consumption of the Pulsar topic: declare the topic's DDL form, with fields such as rip, rtime, and uid, followed by the basic consumption configuration, such as the topic name, service-url, and admin-url. Once the underlying reader has read a message, it decodes it according to the DDL and stores the data in the table test_flink_sql. The second step is regular logic processing (such as field extraction over the table, joins, and so on); after deriving statistics or other results, it returns them and writes them to HDFS or other systems. The third step extracts the relevant fields and inserts them into a Hive table. Because Flink 1.11's Hive write support is better than 1.9.1's, BIGO also did an API-compatibility pass and version upgrade so that the Pulsar Flink Connector supports Flink 1.11.
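A sketch of what such a DDL-driven job can look like with the Flink 1.11 Table API. The WITH option keys depend on BIGO's in-house connector build, so the keys below (mirroring the topic/service-url/admin-url configuration described above), the field list, and the pre-registered hive_table are illustrative assumptions.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PulsarSqlDdlExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Step 1: declare the topic's schema as DDL; the reader decodes each
        // JSON message into rows of test_flink_sql.
        tEnv.executeSql(
                "CREATE TABLE test_flink_sql (\n" +
                "  rip   STRING,\n" +
                "  rtime STRING,\n" +
                "  uid   BIGINT\n" +
                ") WITH (\n" +
                "  'connector'   = 'pulsar',\n" +
                "  'topic'       = 'persistent://public/default/test',\n" +
                "  'service-url' = 'pulsar://broker-host:6650',\n" +
                "  'admin-url'   = 'http://broker-host:8080',\n" +
                "  'format'      = 'json'\n" +
                ")");

        // Steps 2 and 3: regular processing, then insert the extracted fields
        // into a Hive table (hive_table is assumed to be registered already).
        tEnv.executeSql(
                "INSERT INTO hive_table\n" +
                "SELECT rip, rtime, uid FROM test_flink_sql");
    }
}
```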
The real-time stream platform BIGO built on Pulsar and Flink mainly serves real-time ETL processing scenarios and AB-test scenarios.

### The Real-Time ETL Scenario

The real-time ETL scenario mainly uses the Pulsar Flink Source and the Pulsar Flink Sink. In this scenario there are hundreds or even more than a thousand Pulsar topics, each with its own independent schema. We need to apply routine processing to these hundreds or thousands of topics, such as field transformation, fault-tolerant handling, and writing to HDFS. Each topic corresponds to one table on HDFS, so the topics map to hundreds or thousands of HDFS tables, each with different fields; this is the real-time ETL scenario we face.

The difficulty of this scenario lies in the sheer number of topics. Maintaining one Flink job per topic would cost far too much to operate. We had earlier wanted to sink data from Pulsar topics straight to HDFS with the HDFS Sink Connector, but handling the processing logic inside it was cumbersome. In the end, we decided to use one or a few Flink jobs to consume the hundreds or thousands of topics, each topic carrying its own schema, subscribe to all the topics directly with readers, parse each schema, process the data, and write the results to HDFS.

As the program ran, we found that this approach had a problem too: load imbalance across operators. Some topics carry high traffic and some low; mapped to task managers purely by random hashing, some task managers handle far more traffic than others, messages pile up badly on some task machines, and the whole Flink stream slows down. So we introduced the notion of slot groups, grouping topics by traffic. Traffic maps to a topic's partition count, and partition counts are also chosen by traffic at topic creation time: a high-traffic topic gets more partitions, a low-traffic one fewer. For grouping, the small-traffic topics are assigned to one group, and each high-traffic topic is placed in a group of its own, which isolates resources cleanly and keeps overall traffic balanced across task managers, as sketched after the figure below.

![](https://static001.geekbang.org/infoq/5b/5b2e822a238872dbd8766ba8480ca4d2.png)
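Flink's slot sharing groups are one way to express this kind of grouping at the job level. The sketch below assumes two traffic tiers and hypothetical topic names; it pins each tier's source and processing chain to its own slot group so a hot topic cannot starve the small ones. The `print()` calls stand in for the real HDFS sinks.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.pulsar.FlinkPulsarSource;
import org.apache.flink.streaming.util.serialization.PulsarDeserializationSchema;

public class SlotGroupExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        String serviceUrl = "pulsar://broker-host:6650";
        String adminUrl = "http://broker-host:8080";

        // Tier 1: a high-traffic topic isolated in its own slot sharing group,
        // so its backpressure cannot slow the small topics down.
        Properties hotProps = new Properties();
        hotProps.setProperty("topic", "persistent://public/default/hot-topic");
        env.addSource(new FlinkPulsarSource<>(serviceUrl, adminUrl,
                        PulsarDeserializationSchema.valueOnly(new SimpleStringSchema()), hotProps))
           .slotSharingGroup("hot")
           .map(SlotGroupExample::transform).slotSharingGroup("hot")
           .print();

        // Tier 2: many low-traffic topics sharing one slot group.
        Properties coldProps = new Properties();
        coldProps.setProperty("topics",
                "persistent://public/default/cold-a,persistent://public/default/cold-b");
        env.addSource(new FlinkPulsarSource<>(serviceUrl, adminUrl,
                        PulsarDeserializationSchema.valueOnly(new SimpleStringSchema()), coldProps))
           .slotSharingGroup("cold")
           .map(SlotGroupExample::transform).slotSharingGroup("cold")
           .print();

        env.execute("etl-with-slot-groups");
    }

    // Placeholder for per-topic field transformation / fault-tolerant handling.
    private static String transform(String record) {
        return record;
    }
}
```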
### The AB-test Scenario

The real-time data warehouse needs to provide hourly and daily tables for data analysts and recommendation algorithm engineers to query. Put simply, the app emits many kinds of tracking events, which are reported to the server. If the raw events were exposed to business users directly, each consumer would have to access various raw tables, extract data along different dimensions, and run joins across tables. Frequent extraction and join operations over the underlying base tables waste considerable compute resources, so we extract the dimensions users care about from the base tables in advance and merge multiple event types into one or more wide tables, covering 80%~90% of the recommendation-related and analytics-related workloads above them.

The real-time data warehouse scenario also requires real-time intermediate tables. Our solution is: for topics A through K, we use Pulsar Flink SQL to parse the consumed data into the corresponding tables. The usual way to aggregate multiple tables into one is a join, for example joining tables A through K on uid to form a very wide table; but joining many wide tables in Flink SQL is inefficient. So BIGO uses union instead of join to build a very wide view, returns the view at hourly granularity, and writes it into ClickHouse for downstream businesses to query in real time. Replacing join with union to accelerate table aggregation brings the production of the hourly intermediate tables down to minute level.

![](https://static001.geekbang.org/infoq/8f/8fb55f830e2be385742732903741deef.png)
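A sketch of the union-instead-of-join idea in Flink SQL, with hypothetical table and column names: each source table pads the columns it does not own with NULL so the branches are union-compatible, and a downstream aggregation by uid collapses the padded rows into one wide row, avoiding multi-way join state entirely.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class UnionInsteadOfJoinExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assume table_a(uid, a_col) and table_b(uid, b_col) were declared via
        // Pulsar Flink SQL DDL as in the earlier example.
        Table wideView = tEnv.sqlQuery(
                "SELECT uid, a_col, CAST(NULL AS STRING) AS b_col FROM table_a " +
                "UNION ALL " +
                "SELECT uid, CAST(NULL AS STRING) AS a_col, b_col FROM table_b");
        tEnv.createTemporaryView("wide_view", wideView);

        // Collapse the padded rows per uid; MAX skips NULLs, yielding one wide
        // row per user that can be written out to ClickHouse every hour.
        Table wideTable = tEnv.sqlQuery(
                "SELECT uid, MAX(a_col) AS a_col, MAX(b_col) AS b_col " +
                "FROM wide_view GROUP BY uid");
        tEnv.createTemporaryView("wide_table", wideTable);
    }
}
```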
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/8e\/8ede915bd9bd130e484cc0154d1e046a.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"業務收益"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"從 2020 年 5 月上線至今,Pulsar 運行穩定,日均處理消息數百億,字節入流量爲 2~3 GB\/s。Apache Pulsar 提供的高吞吐、低延遲、高可靠性等特性極大提高了 BIGO 消息處理能力,降低了消息隊列運維成本,節約了近 50% 的硬件成本。目前,我們在幾十臺物理主機上部署了上百個 Pulsar broker 和 bookie 進程,採用 bookie 和 broker 在同一個節點的混部模式,已經把 ETL 從 Kafka 遷移到 Pulsar,並逐步將生產環境中消費 Kafka 集羣的業務(比如 Flink、Flink SQL、ClickHouse 等)遷移到 Pulsar 上。隨着更多業務的遷移,Pulsar 上的流量會持續上漲。漲。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我們的 ETL 任務有一萬多個 topic,每個 topic 平均有 3 個分區,使用 3 副本的存儲策略。之前使用 Kafka,隨着分區數增加,磁盤由順序讀寫逐漸退化爲隨機讀寫,讀寫性能退化嚴重。Apache Pulsar 的存儲分層設計能夠輕鬆支持百萬 topic,爲我們的 ETL 場景提供了優雅支持。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"未來展望"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"BIGO 在 Pulsar broker 負載均衡、broker cache 命中率優化、broker 相關監控、BookKeeper 讀寫性能優、BookKeeper 磁盤 IO 性能優化、Pulsar 與 Flink、Pulsar 與 Flink SQL 結合等方面做了大量工作,提升了 Pulsar 的穩定性和吞吐,也降低了 Flink 與 Pulsar 結合的門檻,爲 Pulsar 的推廣打下了堅實基礎。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"未來,我們會增加 Pulsar 在 BIGO 的場景應用,幫助社區進一步優化、完善 Pulsar 功能,具體如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"爲 Apache Pulsar 研發新特性,比如支持 topic policy 
## Business Benefits

Since going live in May 2020, Pulsar has run stably, handling tens of billions of messages per day with an inbound byte rate of 2~3 GB/s. The high throughput, low latency, and high reliability Apache Pulsar provides have greatly improved BIGO's message processing capability, reduced the operating cost of the message queue, and saved nearly 50% of hardware costs. We currently deploy hundreds of Pulsar broker and bookie processes on dozens of physical hosts, co-locating bookies and brokers on the same nodes. We have migrated ETL from Kafka to Pulsar and are gradually migrating the production workloads that consume Kafka clusters (such as Flink, Flink SQL, and ClickHouse) to Pulsar. As more workloads migrate, traffic on Pulsar will keep rising.

Our ETL jobs span more than ten thousand topics, each with 3 partitions on average and a 3-replica storage policy. With Kafka, as the partition count grew, disk access degraded from sequential to random reads and writes, and read/write performance deteriorated badly. Apache Pulsar's tiered storage design easily supports a million topics and serves our ETL scenario elegantly.

## Looking Ahead

BIGO has done a great deal of work on Pulsar broker load balancing, broker cache hit-rate optimization, broker monitoring, BookKeeper read/write performance optimization, BookKeeper disk I/O performance optimization, and the integration of Pulsar with Flink and with Flink SQL. This work has improved Pulsar's stability and throughput, lowered the barrier to combining Flink with Pulsar, and laid a solid foundation for promoting Pulsar.

Going forward, we will expand Pulsar's application scenarios at BIGO and help the community further optimize and extend Pulsar, specifically:

1. Develop new features for Apache Pulsar, such as support for topic policy-related features.
2. Migrate more workloads to Pulsar. This covers two aspects: migrating existing Kafka-based workloads to Pulsar, and connecting new business directly to Pulsar.
3. Use KoP to ensure a smooth data migration. Because BIGO has a large number of Flink jobs consuming Kafka clusters, we want to run a KoP layer directly in Pulsar to simplify the migration.
4. Continue performance optimization of Pulsar and BookKeeper. Given the high traffic in our production environment, BIGO demands high reliability and stability from the system.
5. Keep optimizing BookKeeper's I/O stack. Pulsar's underlying storage is inherently an I/O-intensive system; only by ensuring high I/O throughput at the bottom layer can upper-layer throughput improve and performance stay stable.

**About the author:**

Hang Chen is an Apache Pulsar Committer and the lead of BIGO's big data messaging platform team, responsible for creating and developing a centralized publish-subscribe messaging platform that carries large-scale services and applications. He introduced Apache Pulsar to the BIGO messaging platform and connected it with upstream and downstream systems such as Flink, ClickHouse, and other real-time recommendation and analytics systems. He currently focuses on Pulsar performance tuning, new feature development, and Pulsar ecosystem integration.