快手八卦!突破TensorFlow、PyTorch並行瓶頸的開源分佈式訓練框架來了!

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近日,"},{"type":"link","attrs":{"href":"https:\/\/www.kuaishou.com","title":"xxx","type":null},"content":[{"type":"text","text":"快手"}]},{"type":"text","text":"和蘇黎世理工宣佈開源分佈式訓練框架 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/BaguaSys\/bagua","title":"xxx","type":null},"content":[{"type":"text","text":"Bagua(八卦)"}]},{"type":"text","text":",相比於 PyTorch、TensorFlow 等現有深度學習開源框架僅針對系統層面進行優化,Bagua 突破了這一點,專門針對分佈式場景設計了特定的優化算法,實現了算法和系統層面的聯合優化,性能較同類提升 60%。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"研發背景"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨着摩爾定律的失效,單個計算單元的能力已經遠遠無法滿足數據的指數級增長。比如,快手每天上傳的新視頻超過千萬條,即便訓練簡單的分類模型(比如 ResNet),使用單機單卡的算力,訓練快手日內新增視頻都需要超過一百天的時間。因此,在數據爆炸性增長的互聯網行業,多機多卡的並行訓練成爲了大數據時代的必然。隨着深度學習模型功能的日益強大,分佈式訓練任務的通信成本和所需算力也隨之急劇增長。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然而,由於多機多卡並行帶來的額外通訊成本,加速比(speedup)經常讓大家失望,"},{"type":"text","marks":[{"type":"strong"}],"text":"從而形成了大廠“堆資源”,沒資源的“乾瞪眼”的局面"},{"type":"text","text":"。比如,Google 的 Downpour 框架 [1] 使用 80 個 GPU 訓練 ImageNet,加速比卻只有 12\/80=15%。因此如何提升多機多卡中訓練的通訊效率成爲了並行訓練乃至解決數據爆炸性增長的核心問題之一。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"項目 GitHub 地址:"},{"type":"link","attrs":{"href":"https:\/\/github.com\/BaguaSys\/bagua","title":"","type":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"https:\/\/github.com\/BaguaSys\/bagua"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"現有的深度學習開源框架(PyTorch,TensorFlow)主要針對系統層面優化,把已有的單機單卡優化算法擴展到多機多卡的場景。雖然系統層面的優化使得並行效率不斷提升,但是邊際效益卻越來越明顯。"},{"type":"text","marks":[{"type":"strong"}],"text":"針對這個問題,快手和蘇黎世理工(ETH Zürich)聯合開發了一款名爲“Bagua”的分佈式訓練框架,突破單純的系統層面優化,專門針對分佈式的場景設計特定的優化算法,實現算法和系統層面的聯合優化,極致化分佈式訓練的效率。"},{"type":"text","text":"用戶只需要添加幾行代碼,便能把單機單卡訓練擴展到多機多卡訓練並得到非常可觀的加速比。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Bagua 設計思路"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從單機單卡的訓練到多機多卡訓練的核心,是每個卡把自己的計算結果進行累加和傳播。這個過程好比每個人把自己知道的信息傳遞給他人,然後又從其他人那裏獲取信息,最後完成全局的信息同步。如果把計算單元之間的信息同步類比爲人與人之間的信息同步,那麼社會實踐經驗告訴我們,“八卦”可能是消息傳遞最高效的模式。“八卦”消息傳播具有去中心化、異步通訊、信息壓縮的特點,這與 Bagua 裏面實現的通訊算法剛好一一呼應。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲了提升分佈式訓練效率,Bagua 實現了自研以及前沿的算法,包括去中心化\/中心化、同步\/異步以及通訊壓縮等基礎通訊組件,通過軟硬結合的設計極致優化了這些組件的效率,並且靈活支持這些算法的組合,以及更復雜的算法設計。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Bagua 將通信過程抽象成了如下的算法選項:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"中心化或是去中心化(Centralized or Decentralized):在中心化的通訊模式中,梯度或模型的同步過程需要所有的工作節點進行參與,因此,較高的網絡延時往往會導致訓練效率的降低。去中心化的通信模式 [5,6] 往往可以有效的解決這一問題:在該模式下,工作節點可以被連接成特定的拓撲結構(例如環),在通信過程中,每一個工作節點只與和它相鄰的節點進行通信。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同步或是異步(Synchronous or Asynchronous):同步模式中,在每一次迭代過程中,所有工作節點都需要進行通信,並且下一步迭代必須等待當前迭代的通信完成才能開始。反之,異步式分佈算法 [2] 則不需要等待時間:當某個節點完成計算後就可直接傳遞本地梯度,進行模型更新。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"完整精度模式或信息壓縮模式(Full-Precision or Low-Precision):完整精度模式,會使用與本地模型相同的 32 位浮點數(float32)進行傳輸。另一方面,在通訊存在瓶頸的情況下,基於大量已有研究通過量化 (quantization [3]) 或稀疏化 (sparsification [4]) 等方法壓縮梯度,再用壓縮後的梯度更新參數。在很多場景下,可以達到和完整精度相同的精度,同時提升通訊效率。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"雖然爲了提升通訊效率,Bagua 沒有依照傳統的方式同步所有計算節點的結果,甚至每次同步的信息還有偏差,但是得益於最新理論上的進展,這幾種通訊策略以及他們的組合最終收斂解的正確性和效率仍然能得到充分保證,而且計算複雜度跟同步中心化和信息無損的方法相當,但是通訊效率更高 [10]。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/60\/60303238cd9efe3c9362175c45692dba.gif","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"值得注意的是,在實踐中,分佈式訓練算法往往會使用不止一種上述的優化方法,從而適配更爲極端的網絡環境 [7,8,9]。對於分佈式算法感興趣的讀者,我們在這裏推薦一份最新的完整綜述報告 [10]。Bagua 提供了一套詳盡的通信模式來支持用戶在上述模式中任意選擇組合,我們將這一分佈式訓練系統對於上述算法選項的支持情況總結在下表中:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/e9\/e945069061f75ed2856f1d056554d2d0.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從表格中不難看出,現有框架的優化只是針對較爲通用的算法(中心化同步完整精度),對於其他的算法組合,這些系統的支持非常有限。對於中心化同步進行信息壓縮,這些系統往往只能支持較爲簡單的 float32->float16 壓縮,相較而言,Bagua 則可以支持更爲複雜的 ByteGrad,QAdam 等算法。對於其他的算法組合,現有的框架通常無法支持,而 Bagua 則可以自由支持。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然而,簡單地支持這項算法選項並不能直接在大規模集羣上帶來性能的提升。Bagua 的核心優勢在於,爲了追求極致化的性能,而實現算法和實現的聯合優化。具體來講,基於上述的通信層抽象,用戶既可以方便得選擇系統提供的各種算法組合從而獲得性能提升,又能靈活得實現新的分佈式 SGD 算法 —— Bagua 將自動爲這一算法實現提供系統層優化。這些系統優化包含:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"將通訊時間隱藏在計算時間中:爲了降低通信開銷,Bagua 能夠將部分通信時間隱藏在計算時間中。具體來講,在反向梯度的計算過程中,部分已經完成的梯度可以在剩餘梯度的計算過程中同時進行通信——通過這種流水的處理方式,部分通信時間可以被有效地“隱藏”在反向梯度的計算過程中,從而減小數據並行帶來的通信開銷。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"參數分桶及其內存管理:頻繁的傳輸碎片化數據,會降低通信的效率。因此,Bagua 將模型參數分割成桶,並且分配連續的內存空間來對每一個桶進行管理,這樣通訊的單位就變成了桶,從而能夠更高效地利用通信模型。此外,由於支持了信息壓縮算法,對於壓縮和解壓的函數,其操作的基本單位也是桶,這樣也能使得這些操作的開銷降低。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"分層化的通信實現:由於工業級別的分佈式訓練往往需要多機多卡,而不同物理連接方式所帶來的延時和帶寬也有較大差異,因此,通訊的有效抽象也對性能的提升至關重要。Bagua 將涉及多機的通信抽象成:“機內”和“機間”,並對於相應的通信抽象做了優化。例如,對於信息壓縮傳輸,分層化通訊將會把這一算法解讀成“機內”完整精度,“機間”信息壓縮,從而爲不同的物理鏈接提供最合適的通信算法。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/77\/771a64c7b043047ace2ebb4c0853fa39.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們想要強調的是,這些系統實現層面的優化是對於各種算法組合廣泛適用,而非侷限在某一特定的算法設置上。因此,所有的系統優化都可以被靈活的複用到各種算法實現中去,這在保證“端到端”的性能提升的同時,也爲開發新的分佈式算法提供了良好的平臺。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"經過實驗,Bagua 的特點如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"並行性能顯著提高:在 ImageNet 數據集上,相較當前開源分佈式框架(PyTorch-DDP,Horovod,BytePS),當配置同樣的算力(128GPU)與通信網絡(100Gbps),達到相同的訓練精度,Bagua 只需其他框架 80% 左右的時間;"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對網絡環境更魯棒:由於有效的支持了各類算法優化(信息壓縮,異步,和去中心化),Bagua 在各類網絡環境下(包括不同延時和帶寬)都體現出了良好的適配性。尤其是在高延遲低帶寬的情況下,Bagua 體現出比其他框架更優的加速比,比如:在 10Gbps 網絡帶寬環境下,同樣的 ImageNet 任務,Bagua 只需其他框架 50% 左右的訓練時間來達到同樣的訓練精度;"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"“一鍵式”使用:Bagua 對於端用戶非常友好,現有利用 PyTorch 的模型都可以作爲 Bagua 的輸入,Bagua 將自動爲其提供豐富的並行方案——只需增加幾行代碼,訓練就可以運行在分佈式集羣上;"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"分佈式通訊算法易拓展性:Bagua 提供了針對算法的高拓展性,對於分佈式優化算法的開發者,Bagua 提供了有效的通訊抽象,開發者實現的新算法也可以直接複用 Bagua 的系統優化;"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可用於工業級場景大規模使用:Bagua 爲 Kubernetes 實現了定製化的 operator,支持雲原生部署,同時考慮機器資源和故障問題,有機結合 PyTorch Elastic 和 Kubernetes 實現了容災功能和動態訓練擴縮容。用戶可以通過使用 Bagua ,在少量機器空閒時就開始訓練,在更多機器資源釋放的情況下,訓練任務自動擴容到更多機器。同時機器節點損壞時,自動剔除壞節點繼續訓練。方便工業級訓練場景使用,也方便與機器學習平臺結合使用;"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"安全、故障易排查:Bagua 通訊後端由注重內存安全、速度和併發性的 Rust 語言實現,在編譯期就排除了大量的內存安全問題。同時基於 tracing 實現了分模塊、分層級的 log 輸出,使得實際場景中故障排查更加輕鬆。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"此外,Bagua 在快手內部也經過了工業級任務的實踐檢驗,Bagua 已經在快手內部多個核心業務場景投入使用,相較其他開源框架取得了顯著的性能提升:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大規模自然語言處理(GPT2-xl 量級大小的模型),提升效率 65%"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大規模圖像識別(10+ 億圖像 \/ 視頻),提升效率 20%~30%"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大規模語音識別(TB 級別語音資料),提升效率 20%~30%"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大規模推薦系統(萬億級別參數模型訓練,支持億級別 DAU 的應用),提升效率 100% 以上"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Bagua 和其他開源方案的性能對比"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"快手選用了包括圖像,文本,語音,圖像文本混合等多個訓練任務對 Bagua 的性能進行測試,並與 PyTorch-DDP,Horovod,BytePS 進行比較。得益於 Bagua 系統的高效性和算法的多樣性,Bagua 可以在不同任務中選用相應最優的算法,從而保證在訓練精度和其他系統持平的前提下,訓練速度明顯提高。值得注意的是,當網絡狀況不佳時,Bagua 系統的優勢將更加明顯。下面我們選取 GPT2-XL,BERT-Large 和 VGG16 三個典型的通信密集型任務進行對比說明,更多結果可在 Bagua 論文和網站中("},{"type":"link","attrs":{"href":"https:\/\/github.com\/BaguaSys\/bagua%EF%BC%89%E8%BF%9B%E4%B8%80%E6%AD%A5%E4%BA%86%E8%A7%A3%E3%80%82","title":"","type":null},"content":[{"type":"text","text":"https:\/\/github.com\/BaguaSys\/bagua)進一步瞭解。"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1. End-to-end 訓練時間對比"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下圖展示了在 128 個 V100 GPU 上 fine-tune BERT-Large (SQuAD 數據集),模型 F1 精度隨訓練時間的變化曲線。Bagua 使用 QAdam-1bit 算法加速,機器之間採用 100Gbps TCP\/IP 網絡互聯。我們可以看到,即使在高速網絡下,達到相同的訓練精度,Bagua 需要的時間僅爲其他系統的 60%。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/9f\/9f013dbdc6408f66e393a358580f8077.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2. 擴展性對比"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下圖展示了各個系統在 VGG16 模型(ImageNet 數據集)上的訓練速度與 GPU 數量之間的關係。在測試中分別使用了 1,8,16,32,64,128 個 V100 GPU 進行測試。該任務中 Bagua 使用 8bitsGrad 算法加速。可以看出 Bagua 的擴展效率相比其他系統有較明顯的提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/bc\/bc222acc87947bfb57fb8f5bde9fc5b9.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下圖展示了各個系統在 GPT2-XL 模型上的訓練速度與 GPU 數量之間的關係。GPT2-XL 模型有 1.5 billion 參數,在大模型中具有一定代表性。Bagua 使用 8bitsGrad 算法加速。在測試中分別使用了 8,16,32,64,80 個 V100 GPU 進行測試。同樣可以看出 Bagua 的擴展效率相比其他系統有較明顯的提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/69\/69430626fbcc0821ef13ceb48fac82e2.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"3. 不同網絡環境對比"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"算法是 Bagua 的靈魂。當網絡環境變化時,不同的算法會表現出不同的性能特點。在下圖中,我們以 BERT-Large fine-tune 爲例,調整機器之間網絡的帶寬和延遲,對比 Bagua 中各個算法的 epoch 時間。可以看出,隨着帶寬降低,壓縮算法的優勢會越來越明顯,且和壓縮程度相關;當延遲逐漸升高,去中心化算法逐漸展現出優勢。除此之外,當網絡環境不佳時,Bagua 對比其他系統的優勢也進一步擴大。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/c2\/c2a1dee3adf59ed153b5b56e98bdf1c8.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/3d\/3d3d89595c29755875fc34d9285350cf.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"Bagua 使用實例"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在已有的訓練腳本中使用 Bagua 非常簡單,在代碼中算法使用者只需要增加如下幾行代碼對已有模型進行初始化操作即可。以使用 GradientAllReduce 算法爲例:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先,我們需要 import 一下 bagua"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"null"},"content":[{"type":"text","text":"import bagua.torch_api as bagua\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨後,我們可以初始化 Bagua 的進程組:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"null"},"content":[{"type":"text","text":"torch.cuda.set_device(bagua.get_local_rank())bagua.init_process_group()\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於數據集的初始化,Bagua 完全兼容 PyTorch 的實現:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"null"},"content":[{"type":"text","text":"train_dataset = ...test_dataset = ...train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset, num_replicas=bagua.get_world_size(), rank=bagua.get_rank())train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=batch_size, shuffle=(train_sampler is None), sampler=train_sampler,)test_loader = torch.utils.data.DataLoader(test_dataset, ...)\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最後,用戶只需要選擇要訓練的模型和優化器即可以使用 bagua:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"null"},"content":[{"type":"text","text":"# 定義模型model = ...model = model.cuda()# 定義優化器optimizer = ...# 選擇 Bagua 算法來使用from bagua.torch_api.algorithms import gradient_allreduce# 實例化 Bagua 算法algorithm = gradient_allreduce.GradientAllReduceAlgorithm()# 對現有模型啓用 Bagua 算法model = model.with_bagua( [optimizer], algorithm)\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這樣,使用 Bagua 的多機多卡訓練算法就實現完成了。完整例子和更多場景,歡迎參考 Bagua Tutorial 文檔("},{"type":"link","attrs":{"href":"https:\/\/github.com\/BaguaSys\/bagua%EF%BC%89%E3%80%82","title":"","type":null},"content":[{"type":"text","text":"https:\/\/github.com\/BaguaSys\/bagua)。"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"論文: "},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/abs\/2107.01499","title":"","type":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"https:\/\/arxiv.org\/abs\/2107.01499"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"項目 GitHub 地址:"},{"type":"link","attrs":{"href":"https:\/\/github.com\/BaguaSys\/bagua","title":"","type":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"https:\/\/github.com\/BaguaSys\/bagua"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"參考文獻"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[1] Dean, Jeffrey, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao et al. “Large scale distributed deep networks.” (2012)."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[2] Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Peter Glynn, Yinyu Ye, Li-Jia Li, and Li Fei-Fei. 2018. Distributed asynchronous optimization with unbounded delays: How slow can you go?. In International Conference on Machine Learning. PMLR, 5970–5979."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[3] DanAlistarh, DemjanGrubic, JerryLi, RyotaTomioka, and MilanVojnovic. 2016. QSGD: Communication-efficient SGD via gradient quantization and encoding. arXiv preprint arXiv:1610.02132 (2016)."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[4] Dan Alistarh, Torsten Hoefler, Mikael Johansson, Sarit Khirirat, Nikola Konstanti- nov, and Cédric Renggli. 2018. The convergence of sparsified gradient methods. In Proceedings of the 32nd International Conference on Neural Information Processing Systems. 5977–5987."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[5] Anastasia Koloskova, Sebastian Stich, and Martin Jaggi. 2019. Decentralized stochastic optimization and gossip algorithms with compressed communication. In International Conference on Machine Learning. PMLR, 3478–3487."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[6] Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. 2017. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Proceedings of the 31st International Conference on Neural Information Processing Systems. 5336–5346."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[7] Christopher De Sa, Matthew Feldman, Christopher Ré, and Kunle Olukotun. 2017. Understanding and optimizing asynchronous low-precision stochastic gradient descent. In Proceedings of the 44th Annual International Symposium on Computer Architecture. 561–574."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[8] Xiangru Lian, Wei Zhang, Ce Zhang, and Ji Liu. 2018. Asynchronous decentral- ized parallel stochastic gradient descent. In International Conference on Machine Learning. PMLR, 3043–3052."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[9] Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, and Ji Liu. 2018. Com- munication compression for decentralized training. In Proceedings of the 32nd International Conference on Neural Information Processing Systems. 7663–7673."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[10] Ji Liu, Ce Zhang, et al. 2020. Distributed Learning Systems with First-Order Methods. Foundations and Trends® in Databases 9, 1 (2020), 1–100."}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章