Classic Model Compression Paper Translations, No. 1 (Network Slimming): Learning Efficient Convolutional Networks through Network Slimming

1. Preliminary Notes

Hello everyone, I'm 小P. Today I bring you the first installment in a series of translations of classic papers on deep model compression: a paper on channel pruning. The full document is a Chinese-English parallel text. Readers interested in object detection, model compression, or semantic segmentation are welcome to join QQ group 813221712 for discussion; please read the group announcement before joining!
Click this link to join the group chat [Object Detection]: https://jq.qq.com/?_wv=1027&k=5kXCXF8
Note: owing to CSDN's layout limitations, only a partial sample is posted here; please download the complete Word document from the Baidu Cloud link below or from the group's shared files. Thank you for your understanding!
A quick call: anyone interested in translating classic papers is welcome to message me privately!
Baidu Cloud link: https://pan.baidu.com/s/1ACOOVH6CHV1aeNdIO81MWA  Extraction code: esr7

2. Translated Text

Learning Efficient Convolutional Networks through Network Slimming
Zhuang Liu1∗ Jianguo Li2 Zhiqiang Shen3 Gao Huang4 Shoumeng Yan2 Changshui Zhang1
1CSAI, TNList, Tsinghua University 2Intel Labs China 3Fudan University 4Cornell University {liuzhuangthu, zhiqiangshen0214}@gmail.com, {jianguo.li, shoumeng.yan}@intel.com, [email protected], [email protected]
Abstract
The deployment of deep convolutional neural networks (CNNs) in many real-world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.
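Although the excerpt here stops at the abstract, the mechanism the paper describes is concrete: channel-level sparsity is induced by adding an L1 penalty on the per-channel scaling factors γ of batch normalization layers, and channels whose γ falls below a global threshold are pruned after training. Below is a minimal PyTorch sketch of that idea; the function names, the sparsity weight `lambda_s`, and the percentile-style threshold are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def add_bn_sparsity_grad(model: nn.Module, lambda_s: float = 1e-4) -> None:
    """Apply the subgradient of the L1 penalty lambda_s * |gamma| to every
    BatchNorm scaling factor. Call between loss.backward() and optimizer.step()."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.data.add_(lambda_s * torch.sign(m.weight.data))

def global_gamma_threshold(model: nn.Module, prune_ratio: float = 0.7) -> float:
    """Gather |gamma| from all BatchNorm layers and return the cutoff below
    which the given fraction of channels would be considered insignificant."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    k = int(gammas.numel() * prune_ratio)
    return torch.sort(gammas).values[k].item()

# Typical training-loop usage (sketch):
#   loss.backward()
#   add_bn_sparsity_grad(model, lambda_s=1e-4)
#   optimizer.step()
# After training, channels with |gamma| below global_gamma_threshold(model)
# are removed, and the resulting thin network is fine-tuned.
```

Because γ multiplies every activation of its channel, pushing it toward zero effectively switches the whole channel off, which is why a plain L1 term on γ selects entire channels rather than individual weights.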



