Supporting Frequent Updates and Ad-Hoc Queries: ClickHouse in iQiyi Video Production

As is well known, iQiyi hosts a massive video library. Video production generates real-time data at over a thousand QPS, with terabyte-scale storage. Supporting ad-hoc queries and joins across multiple large tables on data of this scale is the hard problem for big-data applications in iQiyi's video production team.

Specifically:

1) Real-time requirements call for a real-time solution.

2) Production data is updated frequently, so the OLAP store must support updates.

3) Production needs a large-table join solution: stream attributes (hundreds of millions of rows, roughly 100 GB) and program attributes (similar scale) are routinely analyzed together.

iQiyi's production data has one more notable trait: it originates from an OLTP data middle platform. Records are persisted in MongoDB and change events are written to Kafka; within each Kafka message, curData holds the record after the update and oriData holds the record before the change. This structured layout is what makes configuration-driven development possible.

The iQiyi video production team is responsible for video production across materials, finished videos, operations workflows, and images. Around production it has built middle-platform services, monitoring, and data reporting, aiming to make production more efficient, save editors' effort, and deliver quality videos faster and better.

To address these pain points, the team made a series of efforts. This article describes in detail how ClickHouse is applied in iQiyi's video production real-time data warehouse: how business data is processed by the Spark / Spark Streaming compute engines, joined in real time against dimension data kept in HBase, and finally written to ClickHouse to enable ad-hoc queries.
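As a concrete illustration, the change messages described above might be handled like this. Only the curData/oriData field names come from the article; the example fields and values are hypothetical:

```python
import json

def parse_change(msg: str):
    """Split a production change event into (current, original) records.

    The real payload surely carries more fields; curData/oriData are from
    the article, everything else here is illustrative.
    """
    event = json.loads(msg)
    cur = event.get("curData")   # record state after the update
    ori = event.get("oriData")   # record state before the change
    return cur, ori

# Example message for a video whose review state changed (made-up fields).
msg = json.dumps({
    "curData": {"videoId": 42, "state": "published"},
    "oriData": {"videoId": 42, "state": "reviewing"},
})
cur, ori = parse_change(msg)
```

Having both the before and after image in every message is what later allows a collapsing-engine writer to emit cancel rows without re-reading the database.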
The results have been significant: report development cycles shrank from days to hours, meeting the need for frequently updated reports that can be joined in both real-time and offline paths.

## Background and History

The Spark + ClickHouse real-time warehouse design follows from the historical stages of iQiyi video production and the characteristics of its data. As big-data technologies flourished, the data business of iQiyi video production went through two stages.

**Early stage 1**: the team adopted Hive for report development, on top of the company's internal BabelX offline data synchronization tool.

![image](https://static001.geekbang.org/infoq/6c/6c7e6394b77956338595dabab6f2ba90.png)

The drawbacks of stage 1: a full-data run every day was costly and slow; changing a dimension field meant modifying the entire pipeline; and ETL depended wholly on Hive built-in functions, giving low reusability and high maintenance cost.

**Early stage 2**: as production data grew, the visual-query performance MySQL could offer hit a bottleneck and freshness requirements tightened, so reporting entered its second stage and ClickHouse was introduced for real-time report development.
While adopting ClickHouse we also studied industry alternatives such as Druid and Kudu. The conclusion: Druid and Kudu perform reasonably well when a user has few videos, even over long time spans; once a user's video count passes ten million, Druid degrades sharply under aggregation and can even time out. We therefore chose ClickHouse, and through its choice of table engines we were also able to support frequent data updates.

![image](https://static001.geekbang.org/infoq/eb/ebb8fa314a40f27337de5e22adea54dc.png)

The drawbacks at this stage: no cross-table joins; business sources limited to JDBC/ODBC types; the Merge engine does not support updates; and the MySQL-import-then-Truncate flow into ClickHouse can lose data in between.

On this foundation we refined the system into the new architecture described below.

## The Spark + ClickHouse Real-Time Data Warehouse

Without further ado, the architecture diagram:

![image](https://static001.geekbang.org/infoq/d7/d71c619077e7f91162f20878f62f2aa5.png)

*Overall architecture*

ClickHouse is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP), developed by the Russian IT company Yandex for the Yandex.Metrica web analytics service. It allows analysis of data updated in real time, targets high performance, and stores detail-level data.
Spark is a unified analytics engine for large-scale data processing that efficiently supports multiple computation modes, including interactive queries and stream processing. A defining trait is in-memory computation; even when complex jobs spill to disk, Spark remains more efficient than MapReduce. Spark Streaming, an extension of the core Spark API, provides scalable, high-throughput, fault-tolerant processing of live data streams. Being micro-batch based, its latency is somewhat higher than systems built on a process-one-record-at-a-time architecture, but it gains throughput in exchange, and batch inserts are exactly what ClickHouse favors.

Combining the characteristics of Spark / Spark Streaming with those of ClickHouse, the advantages of this scheme are evident: ClickHouse supports updates and is extremely fast, while Spark Streaming's micro-batches are a natural fit for writing to ClickHouse.

The build-out has three main parts.

### Offline data processing

First, the Spark engine routinely imports the full MongoDB dataset into Hive (to shield the business database from load). Then Spark routinely runs ETL over the Hive data and imports the result into ClickHouse offline.

### Real-time data processing

Historical data is handled by a one-off Spark job that writes the MongoDB snapshot into ClickHouse (it may read the business database directly, since the routine Hive import is ours to begin with). Live data is handled by Spark consuming Kafka messages and writing them straight into ClickHouse. If historical data is not needed, consuming Kafka and loading ClickHouse in real time is all it takes.
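The micro-batch write path can be sketched as follows. This is a plain-Python stand-in, not the production job: the real pipeline runs in Spark Streaming and writes through a ClickHouse client, while here only the batching and the bulk-INSERT shape are shown, with all table and column values invented:

```python
from typing import Iterable, List

def to_batches(rows: Iterable[tuple], batch_size: int) -> List[List[tuple]]:
    """Group rows into micro-batch-sized chunks, roughly what Spark
    Streaming hands to each output write."""
    batch, batches = [], []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            batches.append(batch)
            batch = []
    if batch:
        batches.append(batch)
    return batches

def insert_sql(table: str, batch: List[tuple]) -> str:
    """Build one bulk INSERT per batch: ClickHouse favors large,
    infrequent inserts over row-at-a-time writes."""
    values = ", ".join(str(r) for r in batch)
    return f"INSERT INTO {table} VALUES {values}"

batches = to_batches([(1, 'a'), (2, 'b'), (3, 'c')], batch_size=2)
```

One bulk statement per micro-batch is the design point: it matches both Spark Streaming's cadence and ClickHouse's preference for few, large parts.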
The real-time architecture is as follows:

![image](https://static001.geekbang.org/infoq/a5/a5a1b804d3d1208359c1de2c35eeb11c.png)

*Real-time pipeline*

The seam between offline and real-time data deserves attention here: because the ReplacingMergeTree engine is idempotent, the Kafka offset can be reset somewhat further back to guarantee at-least-once delivery. Other engines would leave erroneous rows at the seam, unless Kafka could replay the entire history held in MongoDB.

### Join requirements

Where joins are needed: both tables are currently around 100 GB, so keeping them in Redis or Couchbase would waste memory, and we settled on HBase. HBase acts as the dimension table; the Spark job performs the merge and writes the result into the fact table.

![image](https://static001.geekbang.org/infoq/aa/aa79749bdb1c52311e954b9ca8ef52d6.png)

*Large-table join pipeline*

Beyond the above, a few caveats:

1. For real-time imports into ClickHouse, the dimension rows must be produced before the fact rows.

2. For incremental offline or real-time sync into ClickHouse, either the dimension data must stay essentially unchanged, or a dimension change must also produce changes in the real-time and offline incremental data.

3. Otherwise, dimension changes will not be reflected in the ClickHouse output table.
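The dimension-table join above can be sketched like so, with a plain dict standing in for HBase point lookups. The field names (programId, programName, bitrateKbps) are illustrative, not the real schema:

```python
def enrich(fact_rows, dim_lookup):
    """Join fact rows against a dimension store keyed by program id.

    In the real pipeline the Spark executors issue HBase gets per key
    (typically batched per partition); a dict models that point lookup.
    """
    out = []
    for row in fact_rows:
        dim = dim_lookup.get(row["programId"], {})
        out.append({**row, **dim})   # dimension attrs merged onto the fact row
    return out

# Hypothetical dimension (program) and fact (stream) records.
dim = {7: {"programName": "Demo Show", "channel": "variety"}}
facts = [{"programId": 7, "bitrateKbps": 4500}]
rows = enrich(facts, dim)
```

Doing the lookup at write time is what lets the ClickHouse fact table answer "joined" queries without ClickHouse itself performing a join.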
With the overall architecture now clear, how do we choose a ClickHouse engine that supports frequent updates?

## Supporting Frequent Updates in ClickHouse

For frequent update requests, ClickHouse offers the ReplacingMergeTree and VersionedCollapsingMergeTree engines:

**ReplacingMergeTree (overwrite-style updates)**

With id as the primary key, duplicate rows sharing that key are removed during merges. The absence of duplicates at any given moment is not guaranteed.
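A minimal simulation of that overwrite behavior may help; this sketches the engine's documented merge semantics in Python, it is not ClickHouse itself:

```python
def replacing_merge(rows):
    """Keep, per key, the row with the greatest version: the state a
    ReplacingMergeTree(ver) table converges to after parts merge."""
    latest = {}
    for key, version, payload in rows:
        if key not in latest or version >= latest[key][0]:
            latest[key] = (version, payload)
    return {k: payload for k, (_, payload) in latest.items()}

rows = [
    (1, 100, "draft"),
    (1, 101, "published"),   # later version of the same id wins
    (2, 100, "reviewing"),
]
state = replacing_merge(rows)
```

Note the "converges to": until a background merge (or an explicit OPTIMIZE) runs, a query may still see both versions of id 1, which is exactly the "duplicates not guaranteed absent" caveat above.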
**VersionedCollapsingMergeTree (collapse-style updates)**

Adds row-collapsing logic to the data-part merge algorithm.

**For offline data there are two options.**

**Option 1** is incremental sync on the ReplacingMergeTree engine: Spark routinely syncs MongoDB data into Hive, then Spark consumes the Hive increments and writes them to ClickHouse. Its advantage is the low load of incremental sync. Its drawback concerns joins: incremental offline sync requires either that the dimension data stay essentially unchanged, or that a dimension change also change the fact-side data; otherwise the dimension change never shows up in the fact table.

**Option 2** is full sync on the MergeTree engine: Spark regularly syncs MongoDB into Hive, the ClickHouse table is truncated, and Spark then loads the most recent N days of Hive data into ClickHouse. Its advantage is that it sidesteps Option 1's join problem. Its drawbacks: a full sync is heavy, and only the last N days of data are kept.
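The collapsing logic mentioned above for VersionedCollapsingMergeTree can likewise be simulated: each update writes a cancel row (sign -1, derivable from oriData) and a new state row (sign +1, from curData), and merging cancels (key, version) pairs whose signs sum to zero. Again a sketch of the documented semantics, not the engine:

```python
from collections import Counter

def collapse(rows):
    """Cancel out (+1, -1) row pairs sharing the same (key, version),
    mirroring VersionedCollapsingMergeTree's merge-time folding."""
    counts = Counter()
    for key, version, sign in rows:
        counts[(key, version)] += sign
    # Rows whose signs did not cancel are what survives the merge.
    return {kv: c for kv, c in counts.items() if c != 0}

rows = [
    (1, 1, +1),   # initial state of id=1
    (1, 1, -1),   # cancel row emitted from oriData on update
    (1, 2, +1),   # new state emitted from curData
]
alive = collapse(rows)
```

This makes visible why the approach depends so heavily on oriData being accurate: a wrong or missing cancel row leaves an uncancelled +1 behind forever.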
**For real-time data there are likewise two options.**

**Option 1** is incremental sync on the VersionedCollapsingMergeTree engine: Spark writes the MongoDB snapshot into ClickHouse once, the Kafka consumer offset is reset, and live data is synced in from there. Its advantage: even with duplicate rows, query-time SQL variants can avoid data errors. Its drawbacks: correctness leans heavily on the accuracy of the Kafka messages (oriData, curData) produced by the OLTP data middle platform, and rows at the offline/real-time seam can fail to collapse.

**Option 2** is incremental sync on the ReplacingMergeTree engine: the same snapshot-then-replay flow, but writing into a ClickHouse ReplacingMergeTree table. Its advantage: simpler than VersionedCollapsingMergeTree, with no anomaly at the seam. Its drawback: the absence of duplicate rows is not guaranteed.

Next, how data accuracy is guaranteed.

## Data Accuracy Guarantees

For offline data we mainly did two things. When re-running offline data into a Merge-family ClickHouse table, the affected partitions are dropped before the re-run. And for a full offline re-run of the last N days, the table is truncated before the Spark job starts.
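The offline re-run safeguard just described (drop the partition, then reload) can be sketched as the statements a driver would issue; the table and staging names here are made up, and real partition expressions depend on the table's PARTITION BY key:

```python
def rerun_partition_sql(table: str, dt: str):
    """Statements for re-running one day's data into a Merge-family
    table: drop the stale partition first, then bulk re-insert it.
    Names and the partition expression are illustrative only."""
    return [
        f"ALTER TABLE {table} DROP PARTITION '{dt}'",
        f"INSERT INTO {table} SELECT * FROM {table}_staging WHERE dt = '{dt}'",
    ]

sqls = rerun_partition_sql("dwd_video", "2021-06-01")
```

Dropping before reloading is what keeps the re-run idempotent: running it twice leaves exactly one copy of the day's rows.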
For real-time data: when Spark consumes Kafka, offsets are not auto-committed. Only after all business logic for the current micro-batch has finished are offsets committed manually, giving at-least-once consumption so that no data is lost, while writes into the ClickHouse ReplacingMergeTree engine are idempotent. In addition, ClickHouse is actively merged every interval `time`; to limit server load, only partitions modified within the last `time * 2` are merged. Currently `time` is 5 minutes. As shown below:

![image](https://static001.geekbang.org/infoq/b1/b1c8cfc19d2e4e8142c6f175667ba1e2.png)

*Automatic merge*

That covers the architectural details of the real-time warehouse.

## Configuration-Driven Development

Still, with report requests arriving one after another and each taking days to build, the repetitive work tied up developers. Since iQiyi video production data is all structured, configuration-driven development became possible.

The key piece is a command-line argument parser, Apache Commons CLI: an open-source parsing tool that helps developers build launch commands quickly and organize their parameters and help output.
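The article's parser is Apache Commons CLI in Java; the same configuration-driven idea can be sketched with Python's argparse for brevity. All option names below are hypothetical, the point is only that one generic job binary is specialized entirely through launch arguments:

```python
import argparse

def build_parser():
    """One generic sync job, described entirely by launch arguments,
    so a new report needs configuration rather than new code."""
    p = argparse.ArgumentParser(description="configurable sync job")
    p.add_argument("--source", required=True,
                   help="e.g. a Kafka topic or Hive table")
    p.add_argument("--sink-table", required=True,
                   help="target ClickHouse table")
    p.add_argument("--mode", choices=["realtime", "offline"],
                   default="realtime")
    p.add_argument("--join-dim", default=None,
                   help="optional HBase dimension table to join against")
    return p

args = build_parser().parse_args(
    ["--source", "video_events", "--sink-table", "dwd_video", "--mode", "offline"]
)
```

With the pipeline stages fixed (consume, optional dimension join, batch insert), each new report becomes a new argument list handed to the scheduler rather than a new Spark program.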
![image](https://static001.geekbang.org/infoq/13/1330c4e8c2abad6b9f32d8eafe1ff85e.png)

*Argument-parser structure*

## Value and Roadmap

With the current build of the iQiyi video production real-time warehouse in place, we have essentially reached zero-code development, and report development cycles have dropped from days to hours, satisfying the need for frequently updated real-time and offline joinable reports. It currently backs 4 offline report jobs and 3 real-time report jobs, including 1 offline join requirement and 1 real-time join requirement, with more likely to follow.

Next we will offer page-based operation in the video production platform and productize the sync tool: first integrating with Hive, HBase, and ClickHouse for automatic table creation, then automating job creation, execution, and status monitoring through the scheduler. Through technical innovation we will keep supporting and landing new business scenarios, and keep pushing iQiyi's data and products toward real time.

This article is reproduced from the iQiyi Technology & Product team (WeChat ID: iQIYI-TP).

Original link: [支持頻繁更新、即席查詢:ClickHouse在愛奇藝視頻生產的應用](https://mp.weixin.qq.com/s/YrYWfVxAFD-mLpdfYksaiQ)