A Deep Dive into Fast Run, the Magic Feature That Boosts Model Inference Performance in Megvii's MegEngine

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"一、背景"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於深度學習框架來說,網絡的訓練\/推理時間是用戶非常看中的。在實際生產條件下,用戶設計的 NN 網絡是千差萬別,即使是同一類數學計算,參數也各不相同。如果沒有針對性的優化,框架就完全喪失競爭力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,在一類數學計算中,開發者們會開發多種高效的算法,分別適用於不同的參數,以保證網絡的性能。接下來開發者們需要解決一個新問題,當計算參數確定以後,如何讓最快的算法執行該計算。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大部分框架靠先驗的經驗選擇算法,MegEngine 亦總結有優秀的先驗經驗值,實現計算時自動選擇算法。但是依靠經驗不能保證一定選擇了最快的算法。很多實際場景中,用戶希望網絡有最極致的性能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲此,MegEngine 設計了專門的流程,可以爲每個計算自動選擇最快的算法,從而保證整個網絡的運行時間最短。並且同時能夠將計算的參數和其對應的算法信息以及設備信息記錄到內存或文件,當用戶再次運行網絡時,可以直接獲取性能最好的算法。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這一提升性能的流程被稱爲 Fast Run,它能讓 MegEngine 的用戶運行不同的網絡時都能收穫最好的性能。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"二、Fast Run簡述"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前,主流的框架幾乎都使用了算子(Operator)的概念來抽象數學計算,如卷積算子,矩陣乘算子等。MegEngine也使用了"},{"type":"link","attrs":{"href":"http:\/\/#%E7%AE%97%E5%AD%90%EF%BC%88Operator%EF%BC%89","title":"","type":null},"content":[{"type":"text","text":"算子"}]},{"type":"text","text":"這一概念。此外,在底層,我們開發了名爲 MegDNN 
的計算庫,用以完成實際的數學計算。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MegDNN 僅提供數學計算能力。MegDNN 的頂層也是按照算子的概念組織的,對不同的後端,分別封裝了 MegDNN 算子。一個 MegDNN 算子內部則可能有多個該算子的算法,MegEngine 將算法抽象爲 Algorithm,一個 Algorithm 對象可以完成該算子的計算。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以卷積算子爲例,ARM上,MegEngine 實現了非常通用的 Im2col 算法,有特定條件下性能卓越的 Winograd 算法,有在小尺寸卷積時高性能的 Direct 直接卷積算法等。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"CUDA 上,有調用 cuDNN 庫函數的方法等等。從 MegEngine 算子到 MegDNN 算子再到算法的關係如下圖所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/b0\/63\/b081ef49280a3e69c65bd441yyd0a263.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一個 MegEngine 算子可能持有一個或多個 MegDNN 算子來完成計算,一個 MegDNN 算子需要從多個算法對象中選擇一個來執行計算。爲了極致的計算性能,需要在開始網絡計算之前,給 MegDNN 算子選好最快的算法。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Fast Run 的思路很直接,在網絡計算開始之前,將每個 MegDNN 算子中所有可行的算法全部運行一次(Profiling),並將性能數據記錄下來,將最快的算法設置給 MegDNN 算子。Fast Run 成立的前提條件是算法運行時間是穩定的,這樣比較每個算法的 Profiling 
數據纔有意義。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最後是確定 Fast Run 執行的時間點。MegEngine 有統一的內存管理,各 MegEngine 算子需要在計算開始前向內存規劃單元申請足夠的計算時內存,這一內存包括了其內部的 MegDNN 算子計算時需要的內存,而 MegDNN 算子計算時需要的內存完全由算法決定。這就要求,MegDNN 此刻已經確定了將要使用的算法。自然地,MegEngine 選擇在調用該接口之前執行 Fast Run 流程。這樣,當 Fast Run 流程完成時,各 MegDNN 算子都設置了性能最好的算法。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Fast Run 執行的代價是顯然的,它會顯著增加第一次網絡執行的時間。Fast Run 的流程如下圖:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/de\/af\/de3dfd60696d08b00494457f76de34af.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Fast Run 有下面兩種使用方式,區別在於上圖中寫入的 Cache 文件不同:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"離線 Fast Run,離線 Fast Run分兩步,分別在不同的進程中完成。第一步先將整個網絡計算執行一遍,這一過程中,Fast Run 會將各個算法的性能數據寫到一個專門的數據結構中,最後數據被統一寫入一個Cache文件,隨後進程退出,這個過程稱之爲“搜參”。第二步,加載同樣的網絡,通過 MegEngine 的接口將 Cache 文件讀入。可以看出,離線Fast Run甚至可以在不同的設備上進行。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在線Fast Run,在線Fast 
Run在同一個進程完成的。前半段與離線Fast Run的流程相同,Fast Run後,各算法的性能數據保存在內存中的一個數據結構之中。此時,進程不會退出。後續可以給網絡加載不同的輸入數據,此時各MegDNN算子中已設置好性能最好的算法。並且,也可以初始化另外的網絡,亦可以像離線Fast Run的後半部分一樣,從當前的數據結構中讀取算法。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"總的來說,Fast Run提供搜參和記錄的功能。它的作用是給網絡中的各個 MegDNN 算子選擇當前參數下性能最好的算法。由於 Fast Run 對每個 MegDNN 算子執行同樣的操作,因此它在前向推理和反向傳播時都能使用。目前,MegEngine支持CUDA、CPU、ROCM 三個後端的 Fast Run ,MegEngine 的用戶們在訓練和部署時,均廣泛使用 Fast Run。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"三、Fast Run 原理"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Fast Run 中,Profiling 一個 MegDNN 算子並設置算法,會經歷4個步驟,其流程如下圖示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/a9\/69\/a93424afe16320a8f1c383e61a1a2069.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這一流程中,需要注意一些細節:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1、遞歸搜參:MegDNN 中普遍存在算子嵌套的情況。例如,Convolution 算子中,Im2col 算法會使用 MegDNN 的 MatMul 算子執行矩陣乘計算。那麼,Convolution 的性能直接受到 MatMul 性能的影響。可以看到,在 Profiling 一個 Convolution 算子之前,需要 MatMul 算子執行的性能數據已知。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲了解決這個問題,Fast Run 
使用了遞歸的方式,來解決搜參時的算子嵌套問題。如上圖中虛線框所示,一個 MegDNN 算子,在獲取所有可用算法之後,會調用每個算法的接口,詢問該算法是否依賴子算子並保存相關結果,若最終相關結果不爲空,則會先對子算子進行一次 Profiling,此後,再 Profiling 頂層的算子時,其使用的子算子會有最優的算法保存在 Cache 中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2、Fast Run 性能數據保存:Fast Run 性能數據存取離不開Cache。MegEngine 提供了兩種 PersistentCache,兩種 Cache 區別於數據保存的位置(內存或是文件)。Cache的結構如下圖所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/8b\/b3\/8bc6cffd01ed4e51aa71d16e102ac8b3.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MegEngine 中,PersistentCache 對象是單例的,兩種 Cache 都保證線程安全。Cache 維護一個從 category 信息到一個集合的映射的集合,此處 "},{"type":"link","attrs":{"href":"http:\/\/#L144","title":"","type":null},"content":[{"type":"text","text":"category"}]},{"type":"text","text":"******是一個後端的記錄信息。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Category 是一個字符串,由後端信息和算子類型拼接獲得,"},{"type":"link","attrs":{"href":"http:\/\/#L109","title":"","type":null},"content":[{"type":"text","text":"後端信息"}]},{"type":"text","text":"由設備區分,例如 CUDA 的後端信息由設備名稱、NVIDIA 驅動版本和 CUDA 運行時庫版本信息組成;CPU 作爲後端時,則只記錄設備名稱。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MegEngine 中只有 CUDA、CPU、ROCM 三種類型有對應的 categoty 生成,這也是 MegEngine 
目前僅支持在 CUDA、CPU、ROCM 三個後端支持 Fast Run 的原因。"},{"type":"link","attrs":{"href":"http:\/\/#L52","title":"","type":null},"content":[{"type":"text","text":"算子類型"}]},{"type":"text","text":"由算子名稱、Cache 版本信息兩部分組成。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一個 category 映射到一個集合,該集合維護單個 MegDNN 算子的信息到其所有可用算法的 Profiling 結果的映射。該集合的 "},{"type":"link","attrs":{"href":"http:\/\/#L251","title":"","type":null},"content":[{"type":"text","text":"key值"}]},{"type":"text","text":"******由 MegDNN 算子的所有輸入 "},{"type":"link","attrs":{"href":"http:\/\/#%E5%BC%A0%E9%87%8F%EF%BC%88Tensor%EF%BC%89","title":"","type":null},"content":[{"type":"text","text":"Tensor"}]},{"type":"text","text":" 的尺寸和算子的全部參數組成(這些參數能夠完全決定一個算法是否可用)。"},{"type":"link","attrs":{"href":"http:\/\/#L199","title":"","type":null},"content":[{"type":"text","text":"value值"}]},{"type":"text","text":"******是一個數組,保存每個 Profiling 過的算法的時間、所需額外的空間等信息,並排序。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"排序時,以運行時間進行升序排列,並且保證了序列中每個算法使用的內存必須小於其前一個算法使用的內存 – 這樣序列中不存在一個算法既慢於另一個算法,又使用更多的內存。一個 Cache 中可以存在不同後端的 Fast Run 結果,只要它們的 category 不同。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在一些常見的模型上,推理時關閉和開啓 Fast 
Run,性能表現如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6b\/03\/6b7e731f3c960ae96c19aa335f6b6003.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從工程落地中 Fast Run 的使用情況來看,絕大部分場景下,能顯著降低網絡運行時間。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"四、Fast Run 使用"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MegEngine 可配置的參數衆多,很多都是工程落地的解決方法,在工業上經過大量的實踐。其中一些參數與 Fast Run 的使用有密切的關係,這裏詳細闡述它們的使用。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"4.1 開啓 Fast Run"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"源代碼級別使用 Fast Run 可以參照 MegEngine 自帶的可執行程序 "},{"type":"link","attrs":{"href":"http:\/\/#L721","title":"","type":null},"content":[{"type":"text","text":"load_and_run"}]},{"type":"text","text":",如果僅關注利用 load_and_run 測試模型,有下面兩個參數需要使用:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1. 
--full-run\/--fast-run,搜參的兩種模式,需用戶選擇其中一種模式,兩者的區別在於 Profiling 時,生成的 MegDNN 算子的可用算法集大小不同。--full-run 時,會 Profiling MegDNN算子內所有的可用算法,包括最樸素的算法(MegDNN 算子至少有一個算法保證任何參數下均可用,運行慢)。--fast-run 則會排除樸素算法。如果想要減少 Profiling 的時間開銷,可以選擇使用 --fast-run 模式,此時需要注意的是,如果網絡中有參數過於特殊的算子,則該算子可能面臨沒有可用算法的情況(優化過的算法不可用、樸素的算法被排除),此時 MegEngine 會報出“沒有可用算法”的錯誤並退出。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2. --fast-run-algo-policy,指定 Cache 文件的路徑,文件中的性能數據會被讀入內存,被全局唯一的 PersistentCache 對象持有。進程退出前,PersistentCache 中的性能數據會全部寫入該文件。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"兩個參數可以單獨使用,也可以一起使用:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1. 單獨使用--full-run\/--fast-run,Profiling 數據保存在內存中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2. 兩者一起使用,文件中的性能數據首先會被讀入內存。如果文件爲空,所有 MegDNN 算子完成搜參後,性能數據寫回文件。如果文件不爲空,且某個 MegDNN 算子能從 Cache 中查詢到性能數據,則不會進行搜參,餘下不能查到性能數據的,則會搜參。這樣實現了斷點搜參的功能,MegEngine 稱之爲“續搜“。如果 Fast Run 時程序因爲某些原因異常退出,”續搜“能使 Fast Run 在下一次能夠連上。“續搜”也能讓多個模型的性能數據可以合併在一個 Cache 文件中。如果所有 MegDNN 算子都能從 Cache 中查到性能數據,則搜參不會發生,網絡具有最好的性能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3. 
單獨使用--fast-run-algo-policy,文件中的性能數據首先會被讀入內存,如果 Cache 中沒有記錄,不“續搜”,以經驗值設置  MegDNN 算子的算法,性能可能不是最優。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在使用 Fast Run 時,可以配合 --verbose 一起使用,程序將詳細打印 Fast Run 時的調試信息,包括 MegDNN 算子的名稱,輸入輸出的尺寸信息,設置的算法名稱等。如果發現性能不符合預期,比如當加載的模型和 Cache 文件不匹配時,通常會發生“續搜”,造成網絡執行時間很長的假象。因此,我們強烈推薦在此時使用 --verbose 參數來觀察程序工作是否符合預期。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"4.2 算法屬性"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MegDNN 中某些算法具有獨特的屬性,會影響向 MegDNN 算子設置算法,當前使用的"},{"type":"link","attrs":{"href":"http:\/\/#L107","title":"","type":null},"content":[{"type":"text","text":"屬性"}]},{"type":"text","text":"有:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1. REPRODUCIBLE:具有 REPRODUCIBLE 屬性的算法,可保證計算結果比特對齊。Fast Run 中,在從 Cache 中讀算法信息時提供了對 REPRODUCIBLE 屬性的支持。設置 --reproducible,Fast Run 會從 Cache 中選擇性能最好的且具有 REPRODUCIBLE 屬性的算法。在 Profiling 階段,並不區分算法是否 REPRODUCIBLE,這樣 Cache 中的算法既有 REPRODUCIBLE 屬性的,也有非 REPRODUCIBLE 屬性的,具備一定的泛用性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2. 
NAIVE:只有 MegDNN 中最樸素的算法具有 NAIVE 屬性。--full-run 和 --fast-run 的區別就在於 --fast-run 通過該屬性篩除了運行最慢的樸素算法。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"4.3 weight前處理"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"有些算法,在計算時需要對數據進行輔助轉換。其中,對權重 weight 的轉換可以是一次性的,這樣可以節省運行時間。例如 Winograd 算法,其權重可以在進行卷積計算之前轉好,節約相當一部分運行時的性能開銷。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MegEngine 在 GraphCommonOptimizeOptions 中提供了 weight_preprocess 選項來支持部署時權重的提前轉換功能。一旦設置 weight_preprocess,對於那些weight能夠提前轉換的算法,其性能數據將不會包含權重轉換的時間。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"簡單的說,在搜參階段設置 weight_preprocess,會影響算法的性能數據,從而 Cache 中算法的性能數據排序可能不同。如果 Cache 是在開啓weight 前處理的情況下搜參得到,部署時務必要開啓 weight 前處理以獲得更好的性能,否則有性能下降的風險。Fast Run 與 weight 前處理不是必需的關係,兩者可以分開使用。不過通常情況下,兩者結合使用可以獲得更好的性能。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"4.4 Fast Run 版本"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Fast Run 的版本信息以字符串的形式表示在 Cache 的 category 中。Cache 具有兼容性,可以允許不同的版本的 MegEngine 下的搜參結果集合在同一個 Cache 中,Cache 中看到的是不同的 category。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但是用戶在使用過程,依然需要注意 Fast Run 的版本。一般地,如果 MegDNN 的算法發生了刪除或者是屬性的變動,Fast Run 的版本信息會發生變化。Fast Run 版本信息變化後,需要重新搜參。"}]}]}
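The recursive search from section 3 — ask each algorithm for its sub-operators, profile the children first, then the parent — can be sketched in Python. Everything here (the `Op` class, a plain dict standing in for the PersistentCache, keying by operator name) is a simplified illustration under assumed names, not MegEngine's actual implementation:

```python
import time

class Op:
    """A toy operator: each 'algorithm' is a callable, and an algorithm may
    depend on sub-operators, mirroring e.g. Im2col convolution delegating
    the matrix multiplication to a MatMul operator."""
    def __init__(self, name, algorithms, sub_ops=None):
        self.name = name
        self.algorithms = algorithms  # {algo_name: callable}
        self.sub_ops = sub_ops or {}  # {algo_name: [Op, ...]}

def profile(op, cache):
    """Profile op's algorithms, recursing into sub-operators first, and
    record the fastest algorithm name in `cache` (a dict here)."""
    if op.name in cache:  # already searched: the 'continued search' case
        return cache[op.name]
    for algo, subs in op.sub_ops.items():
        for sub in subs:  # children first, so the parent's timing
            profile(sub, cache)  # runs with their best algorithms known
    results = []
    for algo, fn in op.algorithms.items():
        start = time.perf_counter()
        fn()  # run the algorithm once and time it
        results.append((time.perf_counter() - start, algo))
    cache[op.name] = min(results)[1]  # keep the fastest
    return cache[op.name]
```

A real implementation would profile repeatedly and key the cache by input shapes and parameters rather than by name; the point is only the child-before-parent recursion order.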
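Section 3 describes the Cache layout: a category built from backend information plus operator type, and a key built from all input tensor sizes plus the operator's parameters. A minimal sketch of how such strings might be assembled (the separators, dict fields, and function names are assumptions for illustration, not MegEngine's real format):

```python
def make_category(backend: str, device_info: dict, op_name: str, cache_version: str) -> str:
    """Category = backend info + operator type (name + cache version).
    Per the article: CUDA records device name, driver and runtime versions;
    a CPU backend records only the device name."""
    if backend == "cuda":
        backend_info = "{}-{}-{}".format(
            device_info["device"], device_info["driver"], device_info["runtime"])
    else:
        backend_info = device_info["device"]
    return "{};{};{}".format(backend_info, op_name, cache_version)

def make_key(input_shapes, op_params: dict) -> str:
    """Key = sizes of all input tensors + all operator parameters; these
    fully determine which algorithms are applicable."""
    return repr((tuple(tuple(s) for s in input_shapes),
                 tuple(sorted(op_params.items()))))
```

Because driver and runtime versions are folded into the category, a Cache searched on one machine simply misses (rather than mis-hits) on a machine with a different driver — which is why results from many backends can coexist in one file.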
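The ordering invariant on each Cache value in section 3 — ascending time, with each algorithm using strictly less memory than its predecessor — is a Pareto front over (time, workspace). A small sketch of building such a list from raw profiling tuples (the tuple layout and names are illustrative assumptions):

```python
from typing import List, Tuple

# Each entry: (algorithm name, running time in ms, workspace bytes)
Profile = Tuple[str, float, int]

def build_cache_entry(profiles: List[Profile]) -> List[Profile]:
    """Sort by ascending running time, then keep an algorithm only if it
    needs strictly less workspace than the previous kept one. The result
    never contains an algorithm that is both slower than another and
    uses more memory."""
    ordered = sorted(profiles, key=lambda p: p[1])  # ascending time
    front: List[Profile] = []
    for name, time_ms, workspace in ordered:
        if not front or workspace < front[-1][2]:
            front.append((name, time_ms, workspace))
    return front
```

With such a list, "fastest algorithm fitting in W bytes of workspace" is answered by scanning from the front and taking the first entry whose workspace is at most W.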
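The attribute-based filtering in section 4.2 (--reproducible keeps only REPRODUCIBLE algorithms; --fast-run drops NAIVE ones, which can leave no candidate at all) can be sketched with bit flags. The enum and function below are illustrative assumptions, not MegDNN's API:

```python
from enum import Flag, auto

class Attr(Flag):
    NONE = 0
    REPRODUCIBLE = auto()  # guarantees bit-exact results
    NAIVE = auto()         # the always-usable, slow fallback

def pick(entries, require_reproducible=False, allow_naive=True):
    """entries: list of (time_ms, name, Attr), assumed sorted by ascending
    time as in the Cache. Returns the name of the fastest algorithm that
    passes the attribute filters, or raises like MegEngine's
    'no usable algorithm' error when nothing remains."""
    for time_ms, name, attr in entries:
        if not allow_naive and (attr & Attr.NAIVE):
            continue  # --fast-run: naive algorithms are excluded
        if require_reproducible and not (attr & Attr.REPRODUCIBLE):
            continue  # --reproducible: only bit-exact algorithms qualify
        return name
    raise RuntimeError("no usable algorithm")
```

Note that profiling stores both kinds of algorithm; the filter is applied only when reading from the Cache, which is what keeps one Cache reusable with and without --reproducible.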