突破 PyTorch、TensorFlow 並行瓶頸的開源訓練加速框架到底是啥?

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"隨着摩爾定律的失效,單個計算單元的能力已經遠遠無法滿足數據的指數級增長。比如,快手每天上傳的新視頻超過千萬條,即便訓練簡單的分類模型(比如 ResNet),使用單機單卡的算力,訓練快手日內新增視頻都需要超過一百天的時間。因此,在數據爆炸性增長的互聯網行業,多機多卡的並行訓練成爲了大數據時代的必然。隨着深度學習模型功能的日益強大,分佈式訓練任務的通信成本和所需算力也隨之急劇增長。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"embedcomp","attrs":{"type":"video","data":{"id":"421475","name":"0922 大咖說直播 回放 壓縮","poster":"https:\/\/static001.infoq.cn\/resource\/image\/0b\/0c\/0b4982f37925162yyb2040166eb53a0c.jpeg","url":"https:\/\/media001.geekbang.org\/089d7375928e474c835033147fc2cda5\/7f549ea2e8f14c6184e407834640d169-3ab262b1391dee1e336928c1c6616860-sd.m3u8"}}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 然而,由於多機多卡並行帶來的額外通訊成本,加速比(speedup)經常讓大家失望,從而形成了大廠“堆資源”,沒資源的“乾瞪眼”的局面。比如,Google 的 Downpour 框架 [1] 使用 80 個 GPU 訓練 ImageNet,加速比卻只有 12\/80=15%。因此如何提升多機多卡中訓練的通訊效率成爲了並行訓練乃至解決數據爆炸性增長的核心問題之一。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"項目 GitHub 地址:"},{"type":"link","attrs":{"href":"https:\/\/github.com\/BaguaSys\/bagua","title":null,"type":null},"content":[{"type":"text","text":"https:\/\/github.com\/BaguaSys\/bagua"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"現有的深度學習開源框架(PyTorch,TensorFlow)主要針對系統層面優化,把已有的單機單卡優化算法擴展到多機多卡的場景。雖然系統層面的優化使得並行效率不斷提升,但是邊際效益卻越來越明顯。針對這個問題,快手和蘇黎世理工(ETH Zürich)聯合開發了一款名爲“Bagua”(八卦)的訓練加速框架。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"近日,快手 Senior Staff Research Scientist 廉相如現身"},{"type":"link","attrs":{"href":"https:\/\/www.infoq.cn\/video\/Mv2PX7Xhb1ENJj4DIYyQ?utm_source=home_video&utm_medium=article","title":null,"type":null},"content":[{"type":"text","text":"大咖說"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",與我們分享"},{"type":"link","attrs":{"href":"https:\/\/www.infoq.cn\/article\/BQwk3Vdvm3Tlcz7BLCrq","title":null,"type":null},"content":[{"type":"text","text":"Bagua"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(八卦)的核心技術思路。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:方便先跟我們簡單介紹一下快手研發這款框架的背景嗎?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Bagua(八卦)是快手最新開源的訓練加速框架,之所以做這個框架主要源於快手內部的實際業務需求。隨着摩爾定律的失效,單個計算單元的能力已經遠遠無法滿足數據的指數級增長,快手內部每天上傳的新視頻超過千萬量級,如果是用簡單的分類模型,比如ResNet,在單機單卡的情況下需要超過一百天的時間完成訓練,這肯定是遠遠不能滿足業務迭代的需求。企業使用GPU這種算力更高的硬件替代CPU進行訓練任務,已經是業界共識,但是單個GPU仍然遠遠不能滿足大規模數據訓練的需要,使用多機多卡並行訓練成爲必然趨勢。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"但是,多機多卡並行涉及GPU和GPU之間的協調通訊,會帶來額外的通訊成本,整體的加速比不太樂觀,大廠可以通過堆資源的方式完成這件事情,小廠只能乾瞪眼,比如谷歌當年的Downpour 框架使用 80 個 GPU 訓練 ImageNet,加速比卻只有 12\/80=15%,大量資源被浪費,提升多機多卡的訓練效率成爲數據爆炸性增長時代的重要議題之一。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"近年來,業內關於這個問題也有了一些可行的解決方案,比如PyTorch-DDP、Horovod等,這些框架普遍僅針對系統層面進行優化,比如對 NVIDIA NCCL 的 GPU 通訊庫進行包裝,加上通信計算重疊的簡單優化。這樣的實現和使用都相對簡單,但在性能上還存在較大提升空間。快手結合內部業務對訓練速度的需求研發了Bagua,以通用訓練加速爲目標,不僅對多機多卡通訊進行加速,在效率上超過現存的分佈式訓練方案,還可以提升單卡性能、加速數據流讀取和模型並行效率,從而實現高收益的一站式訓練加速。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:我們從研發伊始就決定好開源了嗎?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"研發伊始,我們聚焦在快手內部業務的痛點問題上,主要是希望解決我們內部遇到的問題,後面發現這些問題是非常共性的,無論是快手這樣的互聯網公司還是小型創業公司都對訓練加速有着極高需求,但是這一方面在社區裏面相對來說是比較稀缺的,包括很多最新出現的訓練算法在工業界一直沒有很好的實現,我們將這些算法實現到了Bagua中,企業可以通過實際應用感受到效果,一方面可以幫助工業界更方便的使用學術界的最新算法,另一方面可以幫助對訓練有需求的公司和研究機構更好的訓練模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:自今年7月份開源至今,我們有收到一些反饋嗎,整個部署和使用成本是怎麼樣的?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"開源之後,我們收到了很多反饋,有些來自於公司內部,有些來自於開源社區。我聽到比較多的一個問題是:Bagua內部實現了非常多算法和優化是否會導致對系統依賴過多。我們也非常理解大家的這種擔憂,因爲依賴過多會降低易用性,所以我們在這方面做了不少努力,我們提供已預裝Bagua的鏡像,用戶可以直接拿來運行,也會逐步在各大公有云上提供相關服務,方便用戶運行。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"另外也提供一鍵式的安裝腳本,當前主流的linux操作系統可以直接運行安裝腳本。當然,這些是遠遠不夠的,易用性對一個開源項目而言是至關重要的,我們會在這方面持續進行優化,也歡迎社區裏面的小夥伴多提意見。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:現有分佈式訓練框架的主要瓶頸卡在哪?如果不出現新的框架,現有的問題是如何解決的?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"這是很好的問題,不同的業務或者機器學習場景是不太一樣的,最常見的是上述提到的多機多卡通訊瓶頸,導致每個GPU完成計算之後需要等待通訊完成再開啓下一個計算週期,造成計算資源的不充分利用,這種情況要麼放棄原有的訓練模型選擇通訊量較小的模型結構,或者使用更多資源。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:這類框架是否適用於不同體量、不同數據量的企業?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"沒錯,大家對速度和性能的要求是無止境的,無論公司大小,深度學習的訓練過程往往需要比較大的開銷,無論模型多大,只要是一個實用的模型都需要消耗GPU等資源,降低這部分花銷,整體的投入產出比就會有明顯提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:訓練的複雜性與模型質量和規模之間的關係可以具體說說嗎?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"其實這個問題比較好解決。在深度學習領域,基本共識是數據量越大,最後的模型效果就會越好,但是數據量太大會導致訓練時間過長,這就需要將二者平衡在我們可接受的範圍內。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"此外,模型結構更加複雜,使用了更多最新研究出來的模型結構,模型最後的訓練效果也會更好,這種模型在數據量相同的情況下往往訓練時間也會更長,需要消耗更多的計算資源,對系統的效率提出更大的挑戰。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:根據之前公開的信息,“八卦”這個名字的由來與Bagua 中實現的通訊算法是呼應的,您方便聊一下Bagua 在通訊算法方面做的優化嗎?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"說到Bagua這個名字還是很有意思的,一開始這只是快手內部使用的框架並沒有很正式的名字,在決定對外開源時才決定認真起一個名字。Bagua內部實現的最新算法,比如去中心化算法、異步算法、梯度壓縮算法很像八卦新聞,爲什麼這麼說呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"從單機單卡的訓練到多機多卡,每個卡都會把自己的計算結果進行累加傳播,這個過程就好像每個人將自己知道的信息傳遞給別人,又從其他人那裏獲取信息,最後完成全局的信息同步。如果將GPU之間的信息同步類比成人與人之間的信息同步,社會經驗告訴我們,八卦新聞或者小道消息是最高效的方式,因爲傳播速度特別快,這與Bagua框架中的算法一一呼應。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"以去中心化的通訊爲例,八卦消息的傳播一般來講都不是中心化的,大家往往是和自己熟悉的人交換信息,這就是一種去中心化的模式,對應到Bagua本身,該框架支持這種去中心化的分佈式訓練算法,每個GPU只和自己鄰近的GPU進行通訊,相比中心化的通訊,這種模式可以有效降低通訊代價,完成同樣的收斂效率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"再說異步通訊,八卦消息的傳播往往都是異步的,並不是所有人同時進行,對應到Bagua,這種通訊模式就是GPU之間的異步通訊,這裏的計算和通訊不要求同步,每個GPU不需要等其他所有GPU都有消息了才做同步,只要計算完成就可以與其他GPU進行通訊,通訊完成繼續執行計算,這可以省去所有GPU同步的開銷。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"最後是信息壓縮,八卦消息往往會有一些熱搜關鍵詞,言簡意賅的將最重要的信息傳遞出去,而不會顯示太多細節,否則大家也記不住,雖然總的信息傳播量很少,但溝通效率很高。對應到Bagua,我們支持對GPU計算出的梯度進行信息壓縮,留下最重要的部分,極大降低通訊開銷。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:爲了實現極致化的性能,Bagua主要做了哪些工作?其在技術上的難點是什麼?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"性能層面的提升除了上述提到的諸多最新通訊算法的加入,我們對系統本身也做了很多優化,這些優化讓Bagua在去掉這些通訊算法加成的情況下依舊可以獲得不錯的性能提升,原因在於我們有很多純系統實現層面的優化,包括底層的通訊協議,目前大家使用的深度學習訓練GPU通常由 NVIDIA 生產,NVIDIA 的GPU 提供通訊底層庫NCCL,我們對NCCL本身的通訊性能也做了優化,獲得了不錯的性能提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"起初,我們主要只做多機分佈式通訊的優化,像是讀取數據並不在我們的優化範圍內,但在快手內部業務落地的過程中,我們發現很多場景的性能瓶頸並不是因爲多機通訊而是卡在了數據讀取上,很多數據讀不進來或者說數據讀取之後做了很多預處理,這些佔據了大部分的訓練時間,因此Bagua對這部分進行了優化,可以自動完成數據讀取,預處理完的結果可以做緩存以加速下一次讀取和預處理的速度,這在內部很多場景效果顯著。這也啓發了我們將 Bagua 做成一個一站式的訓練加速工具,涵蓋訓練流程的各個階段。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"比如,我們還會做模型更新的梯度融合,舉例來說,一般框架中針對模型參數更新採取的方式是逐一更新,Bagua是將很多組參數組合在一起更新,極大提升整體效率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"技術上,比如在 Bagua 立項開始就進行探索的通訊算法方面,難點在於學術界對通訊算法的優化早有研究,但大多停留在理論層面,工業界一直沒有很好的落地,我們需要找到這些算法中好用的、能用的,結合硬件特性做高性能實現,真正發揮出算法的潛力。這需要對新的通訊算法有深刻的理解,同時有豐富的系統實現和工業落地經驗。像是底層通訊實現上,我們花了不少心思,在 NVIDIA 的 GPU 上我們優化後的通訊效率超過了 NVIDIA 自己實現的 NCCL 通訊庫。 其中一個優化點的基本思想是常用的分佈式訓練算法特點是每個GPU的收發數據量一樣,而且要完成操作必須所有GPU都完成通訊,如果GPU彼此之間的收發速度不一致,就會造成“木桶定律”,整體的完成速度取決於最慢的那一個,我們在通訊上實現了一個相對平衡的操作,我們會控制GPU的收發速率,儘量讓其保持同步,這樣就可以提高整體的通訊效率。這需要對算法本身具有充分的瞭解。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:預處理緩存用的是SSD還是GPU的DDR?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我們目前在開源版本里支持的是分佈式KV存儲,主要在內存裏面,比如每個機器存一部分緩存,我們正在打算將本地SSD,完成複合的緩存方案也對外開源,這可以解決的問題是當數據集特別大,所有機器內存加起來都不夠緩存數據集時可以在內存中緩存一部分,其他放在SSD上面,這可以支持更大的數據集。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:在做內存結合SSD加速方案時,SSD是一般的SSD還是類似英特爾傲騰的SSD或其他考量?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"英特爾傲騰SSD使用起來跟內存沒有區別,我們本身也支持這種方式,它是從硬件層面就可以當成內存用,這方面不需要Bagua來進行特殊支持。相反,普通SSD是需要Bagua單獨支持的,我們近期也會做這方面的開源,當然我們不會只用SSD,我們會跟內存結合起來,除非個別大規模場景,其餘需要運行分佈式訓練的場景,多機內存往往是夠用的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:方便具體聊聊Bagua與 PyTorch-DDP,Horovod,BytePS 實現思路上的異同嗎?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"對於當前比較流行的PyTorch-DDP,Horovod,BytePS等框架,我們首先覆蓋了這些框架主要的已有功能(分佈式同步訓練),在此基礎上通過算法和系統的聯合優化,達成更進一步的一站式訓練速度優化,不僅限於分佈式訓練。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:方便舉一個具體的例子介紹下Bagua在具體的業務場景中的使用情況,包括部署門檻、使用前後的效果對比等。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在快手內部,我們從去年初開始在業務場景落地,到今年底應該會實現更大規模的落地。如果我們想在工業場景中落地Bagua,會首先對任務本身進行分析,瞭解目前該場景的性能瓶頸是讀數據IO還是分佈式訓練,針對不同的任務使用不同的優化,效果也是完全不一樣的。開源之後,我們也總結了不少經驗,內部也落地了不少相關的經驗性文檔,對於新的場景,我們往往會推薦先閱讀文檔,嘗試開關一些優化,從而找到最適合的訓練方式。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"目前,快手內部對自然語言處理、圖像識別、推薦系統等場景均有不同程度的嘗試。在自然語言處理方面,我們實現了一個GPT-2量級大小的模型,大概有65%的效率提升;圖像識別方面則有大概20%到30%的效率提升;大規模語音識別是20%到30%的提升;推薦系統的效率提升在100%以上,其特點是通訊頻率特別高,更容易發揮Bagua的優勢。這些模型都屬於數據並行在同一個GPU上可以放下整個模型的任務,接下來我們會針對超出單個GPU可控範圍的模型進行落地和優化。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:在快手的案例中,多機多卡里的多卡大概是用了多少卡?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"一般來講在百張卡的比較多,後續會在更多場景和規模上進一步驗證。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:整個加速算法會影響模型準確度嗎?在實際的業務中怎麼做取捨?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我們做工具的很怕東西做出來之後,別人使用時發現還需要自己調整很多東西,這就會導致整個項目很難推動。我們目前也在做嘗試,針對接受度較高的用戶,他完全可以嘗試調節 Bagua 的各種選項,獲得更高的加速比;對於只想獲得性能收益但不想花時間調整選項的用戶,可以使用 Bagua 的默認方案,默認方案在訓練數值上與傳統方案完全一致,省去了自己調參的麻煩,同時獲得一定的訓練加速收益。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:快手現在是通過AI平臺來實現卡的資源虛擬化的?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"是的,這其中涉及到卡的調度等很多問題,我們內部基於K8s平臺進行不同資源的調度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:在一次訓練過程當中是每張卡單獨負責一個訓練模型?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"這種說法不完全準確,數據並行的方式是指同一個模型在每張卡上都會有一份,每張卡對該模型都會有一個更新的梯度,彼此之間溝通,目前Bagua就是如此,如果後面對超大模型進行並行,每個卡上可能都有模型的全部也可能只有模型的一部分。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:八卦支持Data  Parallel還是Model  Parallel?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我們目前開源部分是支持Data  Parallel,近期會加入 Model  Parallel 的支持。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:爲什麼當初選用Rust去實現?有什麼優勢嗎?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Rust最近幾年的發展速度非常快,發展趨勢也非常猛,類似Bagua這種框架對性能要求非常高,因此可選擇的編程語言相對較少,除了C、C++這兩個老牌編程語言,其他的也就只剩Rust了。對於我們團隊來說,還是比較喜歡嘗試一些新的實現方式,結果也證明Rust的效果確實很好。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Rust本身有非常成熟的異步通訊機制,比如做大量多線程的複用,一個線程做各種異步通訊,然後靈活控制通訊的收發數據。此外,整個框架設計非常多的協同操作,很容易發生內存泄漏,這種情況在C++裏面很常見,但Rust可以有效避免這些錯誤,所以我們最後選擇Rust。畢竟,如果無法保證安全,速度再快也沒用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:在開源框架層面,開發者現在也有不少選擇,您對於框架選型方面有哪些好的建議?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在開源深度學習框架上面,開發者現在確實有很多選擇,比如直接選擇 PyTorch 定義模型,並使用 PyTorch 原生的分佈式訓練方案,目前來看,PyTorch 的用戶 API 正在成爲事實標準,包括TensorFlow 2.0版本、國產的 PaddlePaddle, OneFlow, MegEngine 等都在逐漸過渡到 PyTorch 的使用方式,主要因素就是易用性上的考慮。各種框架往往都在某個細分領域具有一些獨有的優勢,開發者在選擇不同的框架時如果精力允許可以每種框架都進行一定程度的試用和了解,從而找到最適合自己落地場景的選擇。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"受這方面啓發,我們作爲訓練加速工具也在易用性上進行持續優化,比如儘量讓用戶做很少的操作就可以將 Bagua 應用到一個已有的 PyTorch 訓練腳本中,在成本很小的情況下享受到訓練加速的紅利。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"另外,我們會在功能上進行一些添加,對大模型以及非分佈式場景進行優化,也希望開源社區的小夥伴可以給我們更多的反饋,讓我們知道從哪些方面進行提升和優化來讓大家用的更加方便。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"InfoQ:除了框架本身,我們整個團隊未來還會把精力放在哪些方面?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"廉相如:"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"首先是上述提到的關於Bagua框架本身的優化依舊是我們整個團隊的重點,此外在這個過程中,我們沉澱了很多GPU通訊方面的優化經驗,以及GPU如何高效利用這些優化的經驗,很多場景都對此有需要,比如強化學習系統和推薦系統,在做到大規模之後不可避免會出現組件之間的通訊交互,這也是很多系統的瓶頸所在,是並行系統或者分佈式系統最大的痛點,我們希望目前積累下來的優化經驗可以在不同的應用場景之間做平移。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"至於整個深度學習領域,目前比較大的方向是國產深度學習框架的發展,這些框架也都在努力構建自己的生態和社區,部分功能可以和傳統框架做對標,在很多點上又有所超越,逐漸吸引自己的用戶羣。從底層來看,目前國內還是應用 NVIDIA 的GPU比較多,也有一些AI加速芯片的公司在嘗試做一些創新,這個還是很有必要的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"此外,隨着深度學習的逐漸成熟,工業界的應用也越來越多,這也催生了很多相關工作的出現,比如深度學習模型在工業場景下的部署、訓練和推理,資源管理等。在這個過程中,學術界提出了很多好的想法和思路,爲工業界的落地應用指出了很好的道路,但其實最後沉澱到工業界真正可以發揮作用的研究並沒有那麼多,工業界在這其中起到了檢驗的作用,這也是非常重要的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"目前,整個項目處於開源初期,歡迎大家加入Bagua技術交流羣,一起學習交流。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/uploader.shimowendang.com\/f\/sPDFw3SHWYBqvFrj.png!thumbnail?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJhY2Nlc3NfcmVzb3VyY2UiLCJleHAiOjE2MzQwMDIzOTksImciOiJqOHl5V3FEdktDOHBqVEtHIiwiaWF0IjoxNjM0MDAyMDk5LCJ1c2VySWQiOjE2NDY2OTQzfQ.CJiQeVF8PQJjyg5gYDc3tktXxbgLT0b-IfobMD7eUKc","alt":null,"title":null,"style":[{"key":"width","value":"25%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"項目 GitHub 地址:"},{"type":"link","attrs":{"href":"https:\/\/github.com\/BaguaSys\/bagua","title":null,"type":null},"content":[{"type":"text","text":"https:\/\/github.com\/BaguaSys\/bagua"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章