Adobe基於Iceberg的數據湖性能提升實踐

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文介紹了Adobe公司在使用Iceberg時遇到的小文件問題以及高併發寫入的一致性問題。針對這兩個問題,Adobe給出了有指導意義的解決方案。"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"本文最初發表於Adobe技術博客("},{"type":"link","attrs":{"href":"https:\/\/medium.com\/adobetech\/high-throughput-ingestion-with-iceberg-ccf7877a413f","title":"xxx","type":null},"content":[{"type":"text","text":"《High Throughput Ingestion with Iceberg》"}]},{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"),經InfoQ翻譯並分享。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"企業用戶通過使用"},{"type":"link","attrs":{"href":"https:\/\/www.adobe.com\/experience-platform.html","title":null,"type":null},"content":[{"type":"text","text":"Adobe Experience Platform"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"對企業數據進行彙集和標準化處理,可以獲得企業數據360度全方位的展示。我們之前發佈了一篇名爲《"},{"type":"link","attrs":{"href":"https:\/\/medium.com\/@jaeness\/88cf1950e866","title":null,"type":null},"content":[{"type":"text","text":"Iceberg at Adobe"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"》的博客,在這個博客中,介紹了我們在海量數據和一致性方面面臨的挑戰,以及系統向apache Iceberg遷移的需求。Adobe Experience Platform作爲一個開放可拓展的平臺,面對海量的數據,不僅可以支持低延時的"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"流式計算"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",也可以高效的對數據進行"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"批處理"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在使用Iceberg時,我們遇到了將小文件高頻地發送到Adobe Experience Platform進行批處理的場景。該場景在大數據行業中被稱爲“小文件問題”,這是一個常見的典型案例。我們在嘗試將這些小文件大規模地提交到Iceberg時,該問題很快就暴露出來,我們亟需一個解決方案。本文詳細介紹了磁盤緩衝寫入模式下,我們是如何通過關鍵的數據寫入方案來解決這個問題的。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"數據的流式處理"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"數據的流式提取允許你將數據從客戶端和服務器端實時地發送到Experience Platform。Adobe Experience Platform支持將獲取的數據轉化爲流式數據,這些數據會持久化存儲在數據湖中支持流式傳輸的數據集中。在從數據源獲取數據時,可以對數據進行自動驗證的配置操作,以確保數據來源真實可靠。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"負責數據提取的關鍵組件是Siphon Stream。它使用了Apache "},{"type":"link","attrs":{"href":"https:\/\/spark.apache.org\/docs\/latest\/structured-streaming-programming-guide.html","title":null,"type":null},"content":[{"type":"text","text":"Spark Structured Streaming"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"流處理框架,該框架是一個分佈式可拓展的數據處理引擎,支持多種連接器以及對接多種類型的數據文件。鑑於Apache Spark有着豐富的消息處理語義,因此被Adobe Experience Platform選爲默認的數據處理引擎。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在生產環境下,一般架構下的數據流提取遇到了一些挑戰,我們在其他文章中詳細介紹了改進措施,具體點擊"},{"type":"link","attrs":{"href":"https:\/\/docs.google.com\/document\/d\/1k5m_q35LtXfBto7q0FfTzbHmqZ_pZeTsdUiCbCX5Yhw\/edit#heading=h.5mpa15l58hub","title":null,"type":null},"content":[{"type":"text","text":"這裏"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"可以閱讀了解。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"數據的批處理"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"直接數據提取是批處理的一部分。批處理文件將逐批上傳到暫存區中,然後進行處理,最後提交到主存儲區中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/9b\/9bae2fe1d2d8ff9f0544f1d673148e20.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖一:直接數據提取架構"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"對於數據消費者而言,此過程的優點是數據的抽取速度更快,但是該方案的文件存儲方式存在着一些缺點,比如會引起消費者的一些使用問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"這些缺點主要體現在高吞吐量場景下會引發兩個我們熟知的技術問題:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"小文件問題"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"高併發寫入導致資源競爭"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"系統的吞吐量"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我們當前所討論的高吞吐量是以某個數據集每天的文件數來衡量的。我們通常會在時序數據庫中看到一些高頻小文件,這些文件記錄了點擊流和其他事件的數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"通過觀察時序數據,我們每天需要處理的數據量統計如下:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"總計文件抽取數量在"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"700,000"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"左右。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"總計數據抽取大小在"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"1.5 TB"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 左右。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"每個數據集支持的文件吞吐量約爲"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"100,000。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"每個數據集支持的吞吐量大小約爲 "},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"250 GB"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/0c\/0c8da0614ecd445ca38be223420c1ea6.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖二:每日數據吞吐量統計"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"小文件問題"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"小文件問題是分佈式存儲中已知的問題。對於HDFS,當存儲多個小於"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"塊大小"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"的文件時,就會出現此問題。 HDFS旨在處理以"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"大文件"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"形式存儲的大量數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"一般情況下,HDFS中的每個文件,目錄和塊都表示爲namenode機器內存中的一個對象,每個對象佔用150個字節。因此,如果有1000萬個小文件,每個文件佔用一個塊,那麼總共將佔用3GB內存。當前的硬件可以毫無壓力地支持這個級別的內存。當然,如果有10億個文件的話,硬件上可能就無法很好地進行支持了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"小文件進行讀取檢索時,通常會導致從一個數據節點到另外一個數據節點的大量查找和跳轉,這些都是效率十分低下的數據訪問模式。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg通過優化文件掃描並更加準確地定位需要加載的文件,實現讀取效率的提升。Iceberg讀取器會藉助元數據,對分區和列存儲數據進行修剪操作。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"上面的解決方案在一定程度上會有所幫助,但是在諸如Adobe Experience Platform之類的高吞吐量數據抽取系統中,將會遇到在Iceberg表中"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"高併發寫入"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"的問題。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"高併發寫入問題"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg使用樂觀鎖來處理併發寫入問題。當出現數據版本衝突時,Iceberg會自動重試以確保數據更新成功。 在這種情況下,Iceberg可能會進行多次重試,以較低的時延高效地進行數據操作。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg使用快照在併發寫入之間進行隔離,並且這些快照是在Iceberg表的每次提交(數據更新或更改)時進行創建。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg的初始測試報告顯示每個數據集每分鐘限制15次提交,而我們需要每個數據集每分鐘至少處理30次以上。 每次提交均從先前的快照開始,併產生一個新的快照。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"進行這種限制的原因是,Iceberg會一直掃描先前快照的元數據文件,並在寫入數據後生成新的快照。讀取以前的快照然後創建新的快照,這個過程十分耗時,並且耗時會隨着元數據增加而增加。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我們需要解決Iceberg的這一侷限性問題,並在不引入不可預測的數據延遲的情況下,滿足我們對數據提取機制的吞吐量期望。 "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"解決方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我們提出與Iceberg集成的策略是在Adobe Experience Platform中緩衝我們寫入的數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"緩衝寫入"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"是一種批處理模式,可以滿足我們的數據需求,因爲它可以解決我們在Adobe Experience Platform這樣的高吞吐量數據處理環境中出現的兩個主要問題:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"HDFS(Hadoop分佈式文件系統)中已知的小文件問題"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg表上的高併發寫入問題"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"該解決方案意味着存在一個單獨的服務,該服務提供緩衝點,負責確定何時以及如何打包數據,並將數據從該緩衝點遷移到數據湖。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"使用單獨的服務的有以下優點:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"優化寫入效率:減少寫入次數的同時,增加寫入的數據量。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"優化讀取效率:讀取器打開的文件數量進一步縮小。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"彈性伸縮:由於緩衝寫入使用單獨的按需作業,因此該服務可以根據業務需要進行彈性伸縮。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"架構介紹"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"下圖說明了在Adobe Experience Platform中,從生產者到消費者的數據提取的緩衝寫入解決方案。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/e8\/e8a6398044c47f926741a708b9df80d2.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖三: 緩衝寫入解決方案架構圖"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"該架構圖按照數據處理階段可以簡要概括爲三個方面:"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"生產者"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":在數據處理的第一階段,客戶端或外部解決方案生成的數據可以是事件、CRM數據或維度數據,通常這些數據都以文件形式存在。它們將會通過批量提取服務(Bulk Ingest)進行上傳或推送。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"數據處理平臺"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":這是數據處理的第二階段,在這裏數據會被消費,並以Parquet文件寫入到緩衝區中,在緩衝區,數據會停留一段時間。此緩衝區是數據提取過程的第一站。此時,Flux組件將被通知新數據已就緒,並對數據讀取以進行合併。根據指定的規則和條件,Flux會處理一定數量和(或)一定時間的緩衝數據,然後將數據高效地寫入用戶可以訪問的存儲當中。數據跟蹤器(Data Tracker)會依次負責生成高級元數據和指標數據。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"消費者"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":在數據處理的最後階段,客戶可以使用Adobe Experience Platform SDK和API高效提取聚合後的數據,以訓練他們的機器學習模型,或者運行SQL查詢,繪製儀表看板和生成報告,豐富統一配置管理數據,對用戶羣體進行畫像和細分等等 。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"數據平臺包括以下組件:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Bulk Ingest"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":批量提取模塊,負責提取數據並將其保存到緩衝存儲中"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Data Lake Buffer Zone"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":數據湖緩衝區,緩衝數據的存儲位置"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Pipeline Services"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":數據管道服務,即Adobe Experience Platform的Kafka中間件,主要用於狀態管理"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Flux"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":Spark狀態流應用程序,包含緩衝數據的邏輯處理"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Compute"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":基於Azure Databricks構建的Adobe Experience Platform計算服務,該服務提供了對Apache Spark的支持,可以用於執行短生命週期的作業任務以及長期運行的應用程序"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Consolidation Worker"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":數據合併計算單元,使用Iceberg語義處理後將緩衝區存儲中的數據分類合併,然後將其寫入主存儲中"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Data Lake Main Storage"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":數據湖主存儲,可對用戶提供數據存儲和讀取服務"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Data Tracker"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":數據跟蹤器模塊,負責對高級元數據進行管理"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"Catalog"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":":爲用戶提供高級元數據服務"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"數據流"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"數據平臺收集到的文件數據大小不一,並且可能屬於不同的數據分區。這些文件在到達數據處理平臺時會進行緩衝,然後經過重新組合以寫入最少數量的文件,同時生成最少數量的提交。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"爲了瞭解其工作原理,接下來逐一介紹兩種場景:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"無緩衝寫入"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"和"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"緩衝後寫入"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"如果不使用任何緩衝寫入操作,那麼任何新到達的文件都會被立即提取,不會進行任何優化。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/f8\/f865d2ed67b3ac0575ddcab5904741fa.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖四:沒有使用緩衝寫入方案的數據流示意圖"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"從圖中可以看出,15個小文件意味着15次的數據提交。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"使用緩衝寫入方案升級後,數據會一直等待適當的時間來被提取和優化,以使文件和提交的數量最少。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/29\/29e3112ae92ebff00b99a4ae1a909a40.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖五:使用緩衝寫入方案的數據流示意圖"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"從上圖可以看出,使用緩衝寫入方案,同樣的15個小文件,最終合併成了4個文件,並且最後只需要4次提交操作。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"數據合併計算單元"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"實現緩衝寫入的主要組件是“數據合併計算單元(Consolidation Worker)”。這是一個生命週期很短的過程,當緩存了足夠的數據並且必須將其寫入主存儲時會觸發該過程。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"該計算單元以租戶形式工作,並會嘗試優化分區內的數據,但功能不僅僅侷限於此,它還可以處理跨多個分區的文件數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg社區幫助我們克服了使用Consolidation Worker的兩個障礙。首先,通過使用Apache Iceberg中已經存在的Write Audit Publish Flow(WAP)功能,在提交數據時會提供準確一次性語義保證。其次,它降低了由於並行覆寫version-hint.txt文件而導致的提交錯誤。對於這些感興趣的讀者,可以閱讀一下文章結尾列出的資源鏈接。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"寫入審覈發佈流程 (WAP)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg具有“寫入審覈發佈”的功能,該功能爲數據存儲提供分段提交支持,並保證提交後數據可用。WAP功能依賴於外部給定的ID(wapID),之後可以通過該ID檢索到對應的分段提交。最重要的一個作用是它可以確保分段提交的唯一性——確保兩段提交不會存在相同的ID。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/41\/4106ce5b3247e9a10a5745e521d6ae5b.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖六:數據合併計算中的寫入審覈發佈流程"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"WAP工作流程是在Consolidation Worker中實現的:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"通過提供的ID檢查數據是否已經存在於Iceberg中,如果是,則只需更新高級元數據"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"檢查數據是否是作爲單獨的提交緩存,如果是,則將其寫入到表中以供用戶使用"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"如果以上兩種情況均不存在,那麼就加載並使用WAP功能寫入數據:暫存具有特定ID的數據,隨後對數據進行選擇寫入"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"最後,更新高級元數據存儲。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Iceberg的version-hint.txt文件改進"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Iceberg存在高吞吐量併發寫入的問題是因爲要確保當前正在使用的表是最新版本。在某些情況下,由於已知的非原子性操作,比如重命名或者移動文件操作,會導致提供Iceberg SDK當前版本的元文件(version-hint.txt)丟失。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"當Iceberg提交新版本時,它將通過調用HDFS CREATE(overwrite = true)API,用新的版本值(當前版本值的增量)替換version-hint.txt的當前內容。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"而ADLS(Azure DataLake Store)選擇將此API實現爲DELETE + CREATE操作。因此會存在這樣一種可能, 在特定的時間內進行提交操作時,version-hint.txt文件可能會不存在。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"由於Iceberg讀取器和寫入器均依賴於版本提示文件version-hint,在解析並選擇要加載適當的元數據版本文件時會引起數據不一致問題。 "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/d0\/d0e4e127b4fdc45e85ba7df4972c92fe.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖七: Iceberg默認的版本文件實現方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"該問題的解決方案依賴於ADLS文件系統API的一致性保證來持久化表版本,比如CREATE(overwrite = false)操作的原子性和列表目錄的讀寫一致性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"因此,實現方案是使用目錄文件來持久化保存版本信息,因此每個寫入都將使用CREATE(overwrite = false)創建新的文件來標識新版本信息,而讀取版本信息時必須列出版本文件目錄,並選擇此時目錄中版本最高的那個值。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ea\/eabb411b184fd4dccb3d301df91e52b9.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"圖八:Iceberg版本文件實現改進方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"該改進是Adobe解決一致性問題所採用的方法,我們承認Iceberg Community可能會以不同的方法來解決這個問題。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"結論"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"藉助緩衝寫入解決方案,系統可以遠遠以超過我們制定的目標吞吐量的負載正常運行。我們進行了一項基準測試,每小時將大約20萬個小文件提取到單個Iceberg表中,最終測試非常成功。此外,我們還可以支持水平擴展以適應未來海量數據處理的需求。唯一的限制就是計算設置的有限的可用資源,這是一個非常值得研究的問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在讀取數據方面,由於平臺已針對讀取進行了優化,這完全可以滿足我們的業務需求。當讀取數據時,Iceberg本身會帶來很多好處,例如矢量化讀取,元數據過濾和數據過濾。通過藉助其他工具來進一步的優化,還會帶來運行時優勢。我們將在下一個博客《Taking Query Optimizations to the Next Level with Iceberg》中進一步討論這些好處。"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"參考資料"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/www.adobe.com\/experience-platform.html","title":null,"type":null},"content":[{"type":"text","text":"Adobe Experience Platform"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/docs.adobe.com\/content\/help\/en\/experience-platform\/ingestion\/home.html","title":null,"type":null},"content":[{"type":"text","text":"Adobe Experience Platform Data Ingestion Help"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/spark.apache.org\/docs\/latest\/structured-streaming-programming-guide.html","title":null,"type":null},"content":[{"type":"text","text":"Spark Structured Streaming"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/medium.com\/adobetech\/redesigning-siphon-stream-for-streamlined-data-processing-eccbcb2f3038","title":null,"type":null},"content":[{"type":"text","text":"Redesigning Siphon Stream for Streamlined Data Processing (Part 1)"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/medium.com\/adobetech\/part-2-redesigning-siphon-stream-for-streamlined-data-processing-in-adobe-experience-platform-1eca98fc85ad","title":null,"type":null},"content":[{"type":"text","text":"Redesigning Siphon Stream for Streamlined Data Processing in Adobe Experience Platform (Part 2)"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":6,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/medium.com\/adobetech\/creating-the-adobe-experience-platform-pipeline-with-kafka-4f1057a11ef","title":null,"type":null},"content":[{"type":"text","text":"Creating Adobe Experience Platform Pipeline with Kafka"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":7,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"["},{"type":"link","attrs":{"href":"https:\/\/www.mail-archive.com\/[email protected]\/msg00738.html","title":null,"type":null},"content":[{"type":"text","text":"DISCUSS] Write-audit-publish support"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":8,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/github.com\/apache\/iceberg\/issues\/1496","title":null,"type":null},"content":[{"type":"text","text":"Update version-hint.txt atomically (Github ticket discussion)"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":9,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/github.com\/apache\/iceberg\/pull\/1465","title":null,"type":null},"content":[{"type":"text","text":"Core: Enhance version-hint.txt recovery with file listing (Github Pull Request)"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"英文原文鏈接:"},{"type":"link","attrs":{"href":"https:\/\/medium.com\/adobetech\/high-throughput-ingestion-with-iceberg-ccf7877a413f","title":null,"type":null},"content":[{"type":"text","text":"High Throughput Ingestion with Iceberg"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章