Large-Scale Feature Engineering for Recommender Systems and LLVM-Based Spark Optimization with FEDB

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"今天給大家分享第四範式在推薦系統大規模特徵工程與Spark基於LLVM優化方面的實踐,主要包括以下四個主題。"}]},{"type":"bulletedlist","content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大規模推薦系統特徵工程介紹"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"SparkSQL與FESQL架構設計"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基於LLVM的Spark性能優化"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"推薦系統與Spark優化總結"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"大規模推薦系統特徵工程介紹"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"推薦系統在新聞推薦、搜索引擎、廣告投放以及最新很火的短視頻App中都有非常廣闊的應用,可以說絕大部分互聯網企業和傳統企業都可以通過推薦系統來提升業務價值。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們對常見的推薦系統架構進行分層,離線層(Offline layer)主要負責處理存在HDFS的大規模數據進行預處理和特徵抽取,然後使用主流的機器學習訓練框架進行模型訓練並且導出模型,模型可以提供給在線服務使用。流式層(Stream layer)我們也稱近線層,是介於離線和在線的中間層,可使用流式計算框架如Flink進行近實時的特徵計算和生成,結果保存在NoSQL或者關係型數據庫中給在線服務使用。在線層(Online 
layer)就包括了與用戶交互的UI以及在線服務,通過實時的方式提取流式特徵和使用離線模型進行預估,實現推薦系統在線的召回和推薦功能,預估結構和用戶反饋也可以通過事件分發器寫會流失計算的隊列以及離線的Hadoop存儲中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/24/24bb56768dd6c5db3a5527c0af2905cd.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本次分享重點會介紹離線層的優化,在大規模的推薦系統中離線存儲的數據可能到PB級,常用的數據處理有ETL(Extract, Transform, Load)和FE(Feature extraction ), 而編程工具主要是SQL和Python, 爲了能夠處理大規模數據一般使用Hadoop、Spark、Flink這樣的分佈式計算框架,其中Spark因爲同時支持SQL和Python接口在業界使用最廣。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"SparkSQL與FESQL架構設計"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Spark剛發佈了3.0版本,無論是性能還是易用性都有很大的提升,其中相比於Hadoop MapReduce就有 一百倍以上的性能加速,能夠處理PB級數據量,支持水平拓展的分佈式計算和自動Failover,支持易用的SQL、Python、R以及 流式編程(SparkStreaming)、機器學習(MLlib)和圖計算(GraphX)接口,對於推薦系統來說內置的推薦算法模型也可以做到開箱即用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/10/109ffa6708764c0d2afe254a6b248661.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"業界使用Spark作爲推薦系統離線數據處理框架的場景非常多,例如使用Spark來加載分佈式數據集,使用Spark UDF和SQL來做做數據預處理和特徵選擇,使用MLlib來訓練召回、排序模型。但是,在上線部分Spark就支持不了了。主要原因是Spark沒有對long running 
service 的支持,而driver-executor的架構 只 適合做離線的批處理計算,在Spark 3.0中推出了Hydrogen可以支持一些預先運行的task但也只適用於 離 線計算或者模型計算階段 ,對於實時性要求更高但在線服務支持不好。Spark RDD編程接口也是 適合於 迭代計算,我們總結下Spark的優勢主要是能批量處理大規模數據,而且支持標準的SQL語法,劣勢是沒有在線預估服務的支持,因此也不能保證離線和在線服務的一致性,對於AI場景的特徵計算也沒有特別多的優化。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/61/6139dab112474640ef20e64875d7fd35.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第四範式自研的FESQL服務,是在SparkSQL的基礎上,提供了針對AI場景特徵抽取計算的性能優化,還從根本上解決了離線在線一致性的問題。傳統的AI落地場景是先在離線環境通過機器學習訓練框架建模導出AI模型文件,然後由業務開發者來搭建在線服務,由於離線使用了SQL、Python進行了數據預處理和特徵抽取等功能,在線需要開發一套與之匹配的在線處理框架,兩套不同的計算系統在功能上容易出現離線在線不一致的情況,甚至離線建模時就可能使用穿越特徵導致在線部分無法實現。而FESQL的方案是使用統一的SQL語言,除了標準的SQL支持外還拓展了針對AI場景的計算語法以及UDF定義,離線和在線使用同一套高性能LLVM JIT代碼生成,保證了無論是離線還是在線都執行相同的計算邏輯,從而保證機器學習中離線和在線的特徵一致性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/41/41f0e701204e77daa680c3de4fa2476b.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲了支持SparkSQL中無法支持的在線功能,FESQL在線部分實現一個自研的高性能全內存時序數據庫,相比於其他通用的key-value內存數據庫如Redis、VoltDB,在時序特徵的存儲上讀寫性能以及壓縮容量都有很大的提升,並且比傳統的時序數據庫如OpenTSDB能夠更好地滿足在線服務超低延時的需求。而離線部分仍然藉助Spark的分佈式任務調度功能,只是在SQL解析和執行上使用了更高效的native執行引擎,通過C++實現的LLVM JIT代碼生成技術,可以針對morden CPU使用更多intrinsic 
function實現向量化等指令集優化,甚至是特有硬件如FPGA、GPU等加速。通過同一套SQL執行引擎等優化,不僅提升了離線和在線的執行效率,還能從功能上保證離線建模的特徵抽取方案遷移到在線服務而不需要額外的開發和比對工作。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/65/65f02ffef9b983f7d1a9ce6d41e1e470.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"FESQL性能上對比同樣是全內存的商業產品memsql,在針對機器學習的時序特徵抽取場景中,同一個SQL在性能上相比memsql也有巨大的提升。"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/fa/fae6041b38cf2d4dff544c0aeb94f0e3.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"基於LLVM的Spark性能優化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從Spark 2.0開始,開始使用了Catalyst和Tungsten項目對Spark以及SQL任務有了很大的性能優化。Catalyst通過對SQL語法進行詞法解析、語法解析,生成了unresolved的抽象語法樹數據結構,並且對抽象語法樹進行了數十次的優化pass,生成的最終物理計劃可以比普通SQL解析後直接執行快數十倍。Tungsten項目則是通過用Java unsafe接口實現了內部數據結構的堆外管理,在很大程度上降低了JVM GC的overhead,並且對多個物理節點、多個表達式可以實現whole stage codegen,直接生成Java bytecode並使用Janino內存編譯器進行編譯優化,生成的代碼避免過多虛函數調用、提高CPU 
cache命中率,性能上比傳統的火山模型解釋執行快幾倍並且非常接近由高級程序員手寫Java代碼的性能了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那麼Spark的Catalyst和Tungsten是否已經足夠完美呢?我們認爲還不夠,首先Spak是基於Scala、Java實現的,就是是PySpark也是通過socket與JVM相連調用Java函數,因此所有代碼都是在JVM上執行,這樣不可避免就要接受JVM和GC的overhead,而且隨着CPU硬件和指令集的更新要通過JVM來使用新硬件特性還是比較困難的,更不用說越來越流行的FPGA和GPU,對於高性能的執行引擎使用更底層的C或C++實現可以代碼更好的性能提升。對於可並行的數據計算任務,使用循環展開等優化手段可以成倍地提升性能,對於連續的內存數據結構還可以做更多向量化優化以及利用上GPU數千個計算核並行優化,這些在目前最新的Spark 3.0開源版中仍不支持。而且在機器學習場景中常用SQL的window函數來計算時序特徵,這部分功能對應Spark的物理節點WindowExec居然沒有實現whole stage codegen,也就是說在做多表達式的劃窗計算時無法使用Tungsten的優化,通過解釋執行來計算每個特徵,這樣性能甚至比用戶自己寫的Java程序代碼慢上不少。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/77/77e82f9431c9b6ec2925cde41ea88b27.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲了解決Spark的性能問題,我們基於LLVM實現了Spark的native執行引擎,同時兼容了SparkSQL接口,相比與Spark會生成邏輯節點、生成Jave 
bytecode,以及基於JVM運行在物理機上,FESQL執行引擎也會解析SQL生成邏輯計劃,然後通過JIT技術直接生成平臺相關的機器碼來執行,從架構上比Spark少了JVM虛擬機層的開銷,性能也會有更大的提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/5b/5ba11bba24156325b658a0d4a8093af2.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LLVM是目前非常流行的編譯系統工具鏈,其中項目就包括了非常著名的Clang、LLDB等,而機器學習領域TensorFlow主推的MLIR以及TVM都使用了LLVM的技術,它可以理解爲生成編譯器的工具,目前很流行的Ada、C、C++、D、Delphi、Fortran、Haskell、Julia、Objective-C、Rust、Swift等編程語言都提供了基於LLVM實現的編譯器。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"JIT則是與AOT的概念相對應,AOT(Ahead-Of-Time)表示編譯是在程序運行前執行,也就是說我們常編寫的C、Java代碼都是先編譯成binary或者bytecode後運行的,這就屬於AOT compiling。JIT(Just-In-Time)則表示運行時進行編譯優化,現在非常多的解釋型語言如Python、PHP都有應用JIT技術,對於運行頻率非常高的hot code使用JIT技術編譯成平臺優化的native binary,這種動態生成和編譯代碼技術也稱爲JIT compiling。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LLVM提供了高質量、模塊化的編譯、鏈接工具鏈,可以很容易實現一個AOT編譯器,也可以集成到C++項目中實現自定義函數的JIT,下面就是實現一個簡單add函數的例子,相比於直接用C來編寫函數實現,JIT需要在代碼中定義函數頭、函數參數、返回值等數據結構,最終由LLVM JIT模塊來生成平臺相關的符號表和可執行文件格式。由於LLVM內置了海量的編譯優化pass,自己實現的JIT編譯器並不會比GCC或者Clang差很多,JIT可用於生成各種各樣的UDF(User-Defined functions)和UDAF(User-Defined Aggregation Functions),而且LLVM支持多種backend,除了常見的x86、ARM等體系架構還可以使用PTX backend生成運行在GPU的CUDA代碼,LLVM還提供底層的intrinsic 
functions接口讓程序可以用上現代的CPU指令集,性能與手寫C甚至是手寫assembly相當。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/5b/5b6da2d3194c804b04a177dae4950178.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在2020年Spark + AI Summit上,Databrick不僅release了Spark 3.0,還提到了內部的閉源項目Photon,作爲Spark的native執行引擎可以加速SparkSQL等執行效率。Photon同樣使用C++實現,從Databrick的實驗數據可以看出,C++實現的字符串處理等表達式可以比Java實現的性能高出數倍,而且還有更多vectorized指令集支持。整體設計方案與FESQL非常類似,但Photon作爲閉源項目目前只能在Databrick商業平臺上使用,目前還在實驗階段需要聯繫客服手動開啓,由於也沒有更多實現細節公佈因此不能確定Photon是否基於LLVM 
JIT實現,暫時官方也沒有介紹有PTX或者CUDA的支持。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/0b/0ba6b4075f818816fc8aae84beb07577.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/f4/f4639c6c6d86303abb204f83c38de230.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在表達式優化方面,主流和Limit、Where合併、Constant 
folding以及Filter、Cast、Upper、Lower簡化都可以通過optimizationpass來優化,生成最簡潔的表達式計算從而大大減少CPU執行指令數,相關的SQL優化也就不一一贅述了,但只有經過邏輯節點優化、表達式優化、指令集優化、代碼生成後纔可能達到近於頂級程序員手寫代碼的性能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/15/150e18f8e3c2638ba424ad83674f626f.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在機器學習常用的時序特徵抽取測試場景中,同一個SQL語句和測試數據,基於相同版本的Spark調度引擎,使用FESQL的native執行引擎性在單window下性能提升接近2倍,在更復雜的多window場景由於CPU計算更加密集性能可提升接近6倍,多線程下結果類似。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/17/17d82d19767780ae7bf76859c0538d25.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從結果上看,使用LLVM JIT的性能提升非常明顯,使用相同的代碼和SQL,不會修改任何一行代碼只要替換SPARK_HOME下的執行引擎實現,就可以實現接近6倍甚至更大的性能提升。我們從生成的計算圖以及火焰圖找到性能提升的原因,首先在Spark UI上可以看到,在SparkSQL中window節點是沒有實現whole stage codegen的,因此這部分是Scala代碼的解釋執行,而且SparkSQL的物理計劃很長,每個節點間unsafe row的檢查和生成都有一定的開銷,而對比FESQL只有兩個節點,讀數據後直接執行LLVM JIT的binary代碼,節點間的overhead減少很多。從火焰圖分析,底層都是Spark調度器的runTask函數,SparkSQL在進行滑窗計算聚合特徵時採樣數和耗時都比較長,而FESQL是native執行,基本的min、max、sum、avg在編譯器優化後CPU執行時間更短,左側雖然有unsafe 
row的編解碼時間但佔比不大,整體時間比SparkSQL都少了很多。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/77/77719fc561fe9f0a2deda5e1532d1bb4.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"FESQL是目前少有的比開源Spark 3.0性能還更快數倍的native執行引擎,可以支持標準SQL以及集成到Spark中,與Photon僅能在Databrick內部使用不同,我們未來會發布集成LLVM JIT優化的LLVM-enabled Spark Distribution,不需要修改任何一行代碼只要指定SPARK_HOME就可以得到極大的性能加速,也可以兼容目前已有的Spark應用,更多FESQL使用案例請關注Github項目https://github.com/4paradigm/SparkSQLWithFeDB 。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"推薦系統與Spark優化總結"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最後總結我們在推薦系統與Spark優化的工作,首先大規模推薦系統必須依賴能夠處理大數據計算的框架,例如Spark、Flink、ES(Elastic Search)以及FESQL等,Spark是目前最流行的大數據離線處理框架,但目前只適用於離線批處理,不能支持上線。FESQL是我們自研的SQL執行引擎,通過集成內部時序數據庫可以實現SQL一鍵上線並且保證離線在線一致性,而內部通過LLVM JIT可以優化SQL執行性能比開源版Spark 3.0性能還能提升數倍。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/03/030955e126be0a4d73656732abf66ff4.png","alt":"在這裏插入圖片描述","title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"更多FESQL使用案例請關注Github項目 
https://github.com/4paradigm/SparkSQLWithFeDB 。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}}]}
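To make the JIT idea discussed above concrete without pulling in the LLVM C++ API, here is a toy Python analogue of runtime code generation: a SQL-like expression string is turned into a compiled function while the program is running. This is only a conceptual sketch of my own; FESQL and LLVM emit real machine code via IR and backend passes, whereas this demo only emits Python bytecode through the standard `compile`/`exec` machinery.

```python
def jit_compile_udf(expr, args):
    """Generate and compile a UDF at runtime from an expression string.

    Toy analogue of how an LLVM-based engine turns a SQL expression into
    executable code: build the function body dynamically ("codegen"),
    compile it ("JIT compile"), then look up the resulting symbol.
    """
    src = "def udf({}):\n    return {}\n".format(", ".join(args), expr)
    code = compile(src, "<jit>", "exec")   # compile step (bytecode, not machine code)
    namespace = {}
    exec(code, namespace)                  # load step: materialize the function
    return namespace["udf"]

# The "simple add function" from the LLVM example, generated at runtime.
add = jit_compile_udf("a + b", ["a", "b"])
print(add(3, 4))  # 7
```

The real engine gains its speed because LLVM applies its optimization passes and emits native instructions for the target CPU; the structure above only mirrors the define-signature / generate-body / compile / call workflow described in the article.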
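The constant-folding pass mentioned in the expression-optimization discussion can be sketched in a few lines. This is an illustrative, minimal pass over a hypothetical tuple-based expression tree of my own devising, not Catalyst's or FESQL's actual representation, but it shows why such passes shrink the number of instructions executed per row.

```python
# Hypothetical expression tree: ("+", left, right) or ("*", left, right),
# where leaves are numeric literals or column-name strings.
def fold_constants(node):
    """One recursive optimization pass: any subtree whose operands are
    all literals is evaluated at plan time, e.g. ("+", 1, 2) -> 3, so
    the work is done once instead of once per row at execution time."""
    if not isinstance(node, tuple):
        return node                       # literal or column reference
    op, left, right = node
    left, right = fold_constants(left), fold_constants(right)
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        return {"+": left + right, "*": left * right}[op]
    return (op, left, right)              # column involved: keep the node

# SELECT x * (1 + 2)  is simplified to  SELECT x * 3
print(fold_constants(("*", "x", ("+", 1, 2))))  # ('*', 'x', 3)
```

A production engine chains many such passes (Limit/Where merging, Cast/Upper/Lower simplification, and so on) before handing the minimal expression to code generation.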
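For readers unfamiliar with the window-based time-series features benchmarked above, here is a minimal pure-Python sketch of what a SQL window such as `SUM(amount) OVER (PARTITION BY user ORDER BY ts ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)` computes. The data and field names are invented for illustration; this is the semantics being accelerated, not FESQL's or Spark's implementation.

```python
from collections import defaultdict

def window_sum(events, preceding=2):
    """For each event, sum the current row plus up to `preceding`
    earlier rows of the same key, ordered by timestamp -- the core of
    a sliding-window time-series feature."""
    by_key = defaultdict(list)
    # Partition by key, order by timestamp within each partition.
    for key, ts, amount in sorted(events, key=lambda e: (e[0], e[1])):
        by_key[key].append((ts, amount))
    features = {}
    for key, rows in by_key.items():
        for i, (ts, _) in enumerate(rows):
            lo = max(0, i - preceding)            # window lower bound
            features[(key, ts)] = sum(a for _, a in rows[lo:i + 1])
    return features

events = [("u1", 1, 10.0), ("u1", 2, 20.0), ("u1", 3, 30.0), ("u1", 4, 40.0)]
print(window_sum(events)[("u1", 4)])  # 20.0 + 30.0 + 40.0 = 90.0
```

Computing many such aggregates (min, max, sum, avg) over many windows per row is exactly the CPU-bound inner loop where Spark's interpreted WindowExec falls behind and a codegen engine pulls ahead.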