易 AI - ResNet 論文深度講解

原文:https://makeoptim.com/deep-learning/yiai-paper-resnet

論文地址

https://arxiv.org/pdf/1512.03385.pdf

閱讀方式

本文采用原文、翻譯、記錄的排版。

筆者使用《如何閱讀深度學習論文》中的方法進行閱讀,文中標註的 @1(第一步)、@2、@3、@4 分別表示在該步閱讀中的記錄和思考。

注:爲了加深理解,大家可以參考《使用 TensorFlow 2 Keras 實現 ResNet 網絡》一文動手實踐 ResNet 網絡。

Deep Residual Learning for Image Recognition

圖像識別的深度殘差學習

@1 本論文介紹深度殘差學習在圖像識別中的運用,可以猜到深度殘差就是本論文的核心。

Abstract

摘要

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

更深的神經網絡更難訓練。我們提出了一種殘差學習框架,以簡化比以往所用網絡深得多的網絡的訓練。我們明確地將層重新表述爲參考層輸入來學習殘差函數,而不是學習未參考的函數。我們提供了全面的經驗證據,說明這些殘差網絡更容易優化,並可以從顯著增加的深度中獲得準確率提升。在 ImageNet 數據集上我們評估了深度高達 152 層的殘差網絡——比 VGG 網絡[41]深 8 倍但仍具有較低的複雜度。這些殘差網絡的集成在 ImageNet 測試集上取得了 3.57% 的錯誤率。這個結果在 ILSVRC 2015 分類任務上贏得了第一名。我們也在 CIFAR-10 上分析了 100 層和 1000 層的殘差網絡。

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

對於許多視覺識別任務而言,表示的深度是至關重要的。僅憑藉我們極深的表示,我們便在 COCO 目標檢測數據集上獲得了 28% 的相對提升。深度殘差網絡是我們提交 ILSVRC & COCO 2015 競賽參賽方案的基礎,我們還贏得了 ImageNet 檢測、ImageNet 定位、COCO 檢測和 COCO 分割任務的第一名。

@1 摘要中指出更深的神經網絡更難訓練,而作者提出的深度殘差網絡可以解決這個問題,從而可以通過顯著增加深度提高準確性。並且,深度殘差網絡在幾次大賽中都獲得了第一名的成績。

1 Introduction

1 簡介

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit “very deep” [41] models, with a depth of sixteen [41] to thirty [16]. Many other non-trivial visual recognition tasks [8, 12, 7, 32, 27] have also greatly benefited from very deep models.

深度卷積神經網絡[22, 21]造就了圖像分類[21, 50, 40]的一系列突破。深度網絡以端到端的多層方式自然地集成了低/中/高級特徵[50]和分類器,特徵的“級別”可以通過堆疊層的數量(深度)來豐富。最近的證據[41, 44]顯示網絡深度至關重要,在具有挑戰性的 ImageNet 數據集[36]上領先的結果[41, 44, 13, 16]都採用了“非常深”[41]的模型,深度從 16 層[41]到 30 層[16]不等。許多其它重要的視覺識別任務[8, 12, 7, 32, 27]也從非常深的模型中極大受益。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with back-propagation [22].

Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet is presented in Fig. 4.

在深度重要性的推動下,出現了一個問題:學習更好的網絡是否像堆疊更多的層一樣容易?回答這個問題的一個障礙是衆所周知的梯度消失/爆炸問題[1, 9],它從一開始就阻礙了收斂。然而,這個問題已經通過標準初始化[23, 9, 37, 13]和中間標準化層[16]在很大程度上得到解決,這使得數十層的網絡能在帶有反向傳播的隨機梯度下降(SGD)[22]下開始收斂。

圖 1. 具有 20 層和 56 層“普通”網絡的 CIFAR-10 上的訓練誤差(左)和測試誤差(右)。更深的網絡具有更高的訓練誤差,從而具有更高的測試誤差。ImageNet 上的類似現象如圖 4 所示。

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

當更深的網絡能夠開始收斂時,一個退化問題就暴露出來了:隨着網絡深度的增加,準確率達到飽和(這可能並不奇怪)然後迅速下降。出乎意料的是,這種退化不是由過擬合引起的,並且在適當深度的模型上添加更多的層會導致更高的訓練誤差,正如[11, 42]中報告的那樣,並被我們的實驗徹底驗證。圖 1 顯示了一個典型的例子。

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

(訓練準確率的)退化表明不是所有的系統都同樣容易優化。讓我們考慮一個較淺的架構,以及在其上添加更多層得到的較深架構。對於較深的模型存在一個構造出的解:添加的層是恆等映射,其它層從學習好的較淺模型中拷貝而來。這個構造解的存在表明,較深的模型不應該產生比其對應的較淺模型更高的訓練誤差。但是實驗表明,我們目前手頭的求解器無法找到與這個構造解相當或更好的解(或者無法在可行的時間內做到)。

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F (x) := H(x) − x. The original mapping is recast into F(x)+x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

Figure 2. Residual learning: a building block.

在本文中,我們通過引入深度殘差學習框架解決了退化問題。我們明確地讓這些層擬合殘差映射,而不是希望每幾個堆疊的層直接擬合期望的基礎映射。形式上,將期望的基礎映射表示爲 H(x),我們讓堆疊的非線性層擬合另一個映射 F(x) := H(x) − x。原始的映射被重寫爲 F(x) + x。我們假設殘差映射比原始的、未參考的映射更容易優化。極端地說,如果恆等映射是最優的,那麼將殘差推向零要比用一堆非線性層來擬合恆等映射容易得多。

圖 2. 殘差學習:構建塊。
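下面把上一段“恆等映射是最優情況”的論證用公式寫出來,僅作爲輔助理解的推演:

```latex
% 若最優的基礎映射恰好是恆等映射
H^{*}(x) = x
% 則殘差學習的目標變爲零映射
F^{*}(x) = H^{*}(x) - x = 0
% 此時堆疊的非線性層只需把權重推向零,
% 而不必用一堆非線性層去逼近恆等映射本身
```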

The formulation of F (x) + x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

公式 F(x) + x 可以通過帶有“快捷連接”的前饋神經網絡(圖 2)來實現。快捷連接[2, 34, 49]是那些跳過一層或更多層的連接。在我們的案例中,快捷連接簡單地執行恆等映射,並將其輸出添加到堆疊層的輸出上(圖 2)。恆等快捷連接既不增加額外的參數也不增加計算複雜度。整個網絡仍然可以由帶有反向傳播的 SGD 進行端到端的訓練,並且可以使用常用庫(例如,Caffe [19])輕鬆實現,而無需修改求解器。

We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我們在 ImageNet[36]上進行了綜合實驗來顯示退化問題並評估我們的方法。我們發現:1)我們極深的殘差網絡易於優化,但當深度增加時,對應的“簡單”網絡(簡單堆疊層)表現出更高的訓練誤差;2)我們的深度殘差網絡可以從大大增加的深度中輕鬆獲得準確性收益,生成的結果實質上比以前的網絡更好。

Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

CIFAR-10 數據集上[20]也顯示出類似的現象,這表明了優化的困難以及我們的方法的影響不僅僅是針對一個特定的數據集。我們在這個數據集上展示了成功訓練的超過 100 層的模型,並探索了超過 1000 層的模型。

On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

在 ImageNet 分類數據集[36]上,我們通過非常深的殘差網絡獲得了很好的結果。我們的 152 層殘差網絡是 ImageNet 上迄今爲止最深的網絡,同時還具有比 VGG 網絡[41]更低的複雜度。我們的模型集成在 ImageNet 測試集上取得了 3.57% 的 top-5 錯誤率,並在 ILSVRC 2015 分類比賽中獲得了第一名。極深的表示在其它識別任務中也有極好的泛化性能,並帶領我們進一步贏得了 ILSVRC & COCO 2015 競賽中 ImageNet 檢測、ImageNet 定位、COCO 檢測和 COCO 分割任務的第一名。這些堅實的證據表明殘差學習準則是通用的,我們期望它也適用於其它視覺和非視覺問題。

@2 從簡介部分可以瞭解到,梯度消失/爆炸問題已在很大程度上被標準初始化和中間標準化層解決,但更深的網絡仍面臨退化問題:準確率飽和後迅速下降,且這種退化並非由過擬合引起。作者提出通過深度殘差學習(恆等映射、快捷連接)來解決這個退化問題,既不增加額外的參數也不增加計算複雜度,使得網絡易於優化,並提高了泛化性能。同時,作者在多個數據集上的實踐也表明殘差學習準則是通用的,不侷限於特定的數據集,也不一定侷限於視覺問題。

2 Related Work

2 相關工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.

殘差表示。在圖像識別中,VLAD[18]是一種通過相對於字典的殘差向量進行編碼的表示形式,Fisher 矢量[30]可以表示爲 VLAD 的概率版本[18]。它們都是圖像檢索和圖像分類[4, 48]中強大的淺層表示。對於矢量量化,編碼殘差矢量[17]被證明比編碼原始矢量更有效。

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低級視覺和計算機圖形學中,爲了求解偏微分方程(PDE),廣泛使用的 Multigrid 方法[3]將系統重構爲多個尺度上的子問題,其中每個子問題負責較粗尺度和較細尺度之間的殘差解。Multigrid 的替代方法是層次化基預處理[45, 46],它依賴於表示兩個尺度之間殘差向量的變量。已經被證明[3, 45, 46],這些求解器比不知道解的殘差性質的標準求解器收斂得更快。這些方法表明,良好的重構或預處理可以簡化優化。

Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an “inception” layer is composed of a shortcut branch and a few deeper branches.

快捷連接。導致快捷連接[2, 34, 49]的實踐和理論已經被研究了很長時間。訓練多層感知機(MLP)的早期實踐是添加一個連接網絡輸入和輸出的線性層[34, 49]。在[44, 24]中,一些中間層直接連接到輔助分類器,用於解決梯度消失/爆炸。論文[39, 38, 31, 47]提出了對層響應、梯度和傳播誤差進行中心化的方法,由快捷連接實現。在[44]中,一個“inception”層由一個快捷分支和一些更深的分支組成。

Concurrent with our work, “highway networks” [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

和我們同時進行的工作,“highway networks”[42, 43]提出了帶有門控函數[15]的快捷連接。這些門是數據相關且有參數的,與我們不帶參數的恆等快捷連接相反。當門控快捷連接“關閉”(接近零)時,highway 網絡中的層表示非殘差函數。相反,我們的公式總是學習殘差函數;我們的恆等快捷連接永遠不會關閉,所有的信息總是通過,同時還有額外的殘差函數要學習。此外,highway 網絡還沒有證實極度增加的深度(例如,超過 100 層)能帶來準確性收益。

@3 作者指出他並不是殘差思想的第一個提出者,不過作者將其很好地運用起來了。

3. Deep Residual Learning

3. 深度殘差學習

3.1. Residual Learning

3.1. 殘差學習

Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions2, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x)+x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

我們考慮將 H(x) 作爲幾個堆疊層(不必是整個網絡)要擬合的基礎映射,x 表示這些層中第一層的輸入。如果假設多個非線性層可以漸近地近似複雜函數,那麼這等價於假設它們可以漸近地近似殘差函數,即 H(x) − x(假設輸入輸出維度相同)。因此,我們明確讓這些層近似殘差函數 F(x) := H(x) − x,而不是期望堆疊層近似 H(x)。原始函數因此變爲 F(x) + x。儘管兩種形式都應該能漸近地近似所需的函數(如假設的那樣),但學習的難易程度可能是不同的。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

關於退化問題的反直覺現象(圖 1 左)激發了這種重構。正如我們在引言中討論的那樣,如果添加的層可以被構造爲恆等映射,更深模型的訓練誤差應該不大於它對應的更淺版本。退化問題表明,求解器可能難以用多個非線性層來近似恆等映射。通過殘差學習的重構,如果恆等映射是最優的,求解器可以簡單地將多個非線性層的權重推向零來接近恆等映射。

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.

Figure 7. Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

在實際情況下,恆等映射不太可能是最優的,但是我們的重構可能有助於對問題進行預處理。如果最優函數更接近於恆等映射而不是零映射,那麼求解器參考恆等映射來尋找擾動,應該比把該函數作爲一個全新的函數來學習更容易。我們通過實驗(圖 7)顯示,學習到的殘差函數通常具有較小的響應,這表明恆等映射提供了合理的預處理。

圖 7. 層響應在 CIFAR-10 上的標準差(std)。這些響應是每個 3×3 層的輸出,在 BN 之後、非線性之前。上面:以原始順序顯示層。下面:響應按降序排列。

3.2. Identity Mapping by Shortcuts

3.2. 快捷恆等映射

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

y = F(x, {W_i}) + x.  (1)

Here x and y are the input and output vectors of the layers considered. The function F(x, {W_i}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W_2 \sigma(W_1x) in which σ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., \sigma(y), see Fig. 2).

我們對每幾個堆疊的層採用殘差學習。構建塊如圖 2 所示。在本文中,我們將構建塊正式定義爲:

y = F(x, {W_i}) + x.  (1)

xy 是考慮的層的輸入和輸出向量。函數 F(x, {W_i}) 表示要學習的殘差映射。圖 2 中的例子有兩層,F = W_2 \sigma(W_1x)σ 表示 ReLU[29],爲了簡化寫法忽略偏置項F + x 操作通過快捷連接和各個元素相加來執行。在相加之後我們採納了第二種非線性(即 \sigma(y),看圖 2)。

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

方程(1)中的快捷連接既沒有引入額外參數,也沒有增加計算複雜度。這不僅在實踐中有吸引力,而且在簡單網絡和殘差網絡的比較中也很重要。我們可以公平地比較同時具有相同參數數量、深度、寬度和計算成本的簡單/殘差網絡(除了可忽略的逐元素加法之外)。

The dimensions of x and F must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Ws by the shortcut connections to match the dimensions:

y = F(x, {W_i}) + W_s x.  (2)

We can also use a square matrix Ws in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Ws is only used when matching dimensions.

方程(1)中 xF維度必須是相等的。如果不是這種情況(例如,當更改輸入/輸出通道時),我們可以通過快捷連接執行線性投影 Ws 來匹配維度:

y = F(x, {W_i}) + W_s x.  (2)

我們也可以在方程(1)中使用方陣 W_s。但是我們將通過實驗表明,恆等映射足以解決退化問題並且更經濟,因此 W_s 僅在匹配維度時使用。
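方程(2)的投影快捷連接可以用 1×1 卷積來實現,下面是一個示意性的草圖(projection_block 爲筆者假設的名稱;這裏順帶用步長 2 演示跨特徵圖尺寸的情形):

```python
import tensorflow as tf
from tensorflow.keras import layers

def projection_block(x, filters, stride=2):
    """當輸入輸出維度不匹配時,用 1x1 卷積 W_s 做線性投影來匹配維度(方程(2))。"""
    # W_s x:1x1 卷積同時完成通道變換與步長爲 2 的下采樣
    shortcut = layers.Conv2D(filters, 1, strides=stride, use_bias=False)(x)
    shortcut = layers.BatchNormalization()(shortcut)

    # F(x, {W_i})
    out = layers.Conv2D(filters, 3, strides=stride, padding="same", use_bias=False)(x)
    out = layers.BatchNormalization()(out)
    out = layers.ReLU()(out)
    out = layers.Conv2D(filters, 3, padding="same", use_bias=False)(out)
    out = layers.BatchNormalization()(out)

    out = layers.Add()([out, shortcut])             # F(x, {W_i}) + W_s x
    return layers.ReLU()(out)
```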

The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W_1x + x, for which we have not observed advantages.

Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet-34. Right: a “bottleneck” building block for ResNet-50/101/152.

圖 5. 用於 ImageNet 的更深的殘差函數 F。左:如圖 3 中 ResNet-34 的構建塊(作用在 56×56 的特徵圖上)。右:ResNet-50/101/152 的“bottleneck”構建塊。

殘差函數 F 的形式是靈活的。本文中的實驗涉及的函數 F 有兩層或三層(圖 5),也可以有更多的層。但如果 F 只有一層,方程(1)就類似於線性層:y = W_1 x + x,對此我們沒有觀察到優勢。

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x,{W_i}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

我們還注意到,爲了簡單起見,儘管上述符號是關於全連接層的,但它們同樣適用於卷積層。函數 F(x,{W_i}) 可以表示多個卷積層。元素加法在兩個特徵圖上逐通道進行。

3.3. Network Architectures

3.3. 網絡架構

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

我們測試了各種簡單/殘差網絡,並觀察到了一致的現象。爲了提供討論的實例,我們描述了 ImageNet 的兩個模型如下。

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).

Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Down-sampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.

簡單網絡。我們的簡單網絡基準(圖 3,中間)主要受到 VGG 網絡[41](圖 3,左)的哲學啓發。卷積層主要採用 3×3 的濾波器,並遵循兩個簡單的設計規則:(i)對於相同的輸出特徵圖尺寸,各層具有相同數量的濾波器;(ii)如果特徵圖尺寸減半,則濾波器數量加倍,以保持每層的時間複雜度。我們通過步長爲 2 的卷積層直接執行下采樣。網絡以全局平均池化層和帶有 softmax 的 1000 路全連接層結束。圖 3(中間)的加權層總數爲 34。

圖 3. ImageNet 的網絡架構例子。左:作爲參考的 VGG-19 模型[41](196 億 FLOPs)。中:具有 34 個參數層的簡單網絡(36 億 FLOPs)。右:具有 34 個參數層的殘差網絡(36 億 FLOPs)。虛線的快捷連接增加了維度。表 1 顯示了更多細節和其它變種。

表 1. ImageNet 架構。構建塊顯示在括號中(也可看圖 5),以及構建塊的堆疊數量。下采樣通過步長爲 2conv3_1, conv4_1conv5_1 執行。

It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).

值得注意的是,我們的模型與 VGG 網絡[41](圖 3,左)相比,有更少的濾波器和更低的複雜度。我們的 34 層基準有 36 億 FLOPs(乘加),僅是 VGG-19(196 億 FLOPs)的 18%。

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

殘差網絡。基於上述的簡單網絡,我們插入快捷連接(圖 3,右),將網絡轉換爲其對應的殘差版本。當輸入和輸出具有相同的維度時(圖 3 中的實線快捷連接),可以直接使用恆等快捷連接(方程(1))。當維度增加時(圖 3 中的虛線快捷連接),我們考慮兩個選項:(A)快捷連接仍然執行恆等映射,用額外的零條目填充以增加維度,此選項不會引入額外的參數;(B)使用方程(2)中的投影快捷連接來匹配維度(由 1×1 卷積完成)。對於這兩個選項,當快捷連接跨越兩種尺寸的特徵圖時,均以步長 2 執行。
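選項 A(零填充的恆等快捷連接)不引入任何參數,下面是一個示意性的實現草圖(shortcut_option_a 爲筆者假設的名稱,用切片做空間下采樣、用零填充補齊通道,僅供參考):

```python
import tensorflow as tf
from tensorflow.keras import layers

def shortcut_option_a(x, out_filters):
    """選項 A:恆等映射 + 零填充增加維度,對應圖 3 中跨越特徵圖尺寸的虛線快捷連接。"""
    pad = out_filters - x.shape[-1]                 # 需要補齊的通道數
    return layers.Lambda(
        lambda t: tf.pad(t[:, ::2, ::2, :],         # 步長 2 的切片:空間尺寸減半
                         [[0, 0], [0, 0], [0, 0], [0, pad]])  # 通道維補零
    )(x)

# 用法示例:把 56x56x64 的特徵圖映射到 28x28x128,不引入任何參數
inputs = tf.keras.Input(shape=(56, 56, 64))
shortcut = shortcut_option_a(inputs, 128)           # 之後可與殘差分支 Add
```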

3.4. Implementation

3.4. 實現

Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60 × 10^4 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].

我們在 ImageNet 上的實現遵循[21, 41]的實踐。調整圖像大小,使其較短的邊在[256, 480]之間隨機採樣,用於尺度增強[41]。從圖像或其水平翻轉中隨機採樣 224×224 的裁剪,並逐像素減去均值[21]。使用了[21]中的標準顏色增強。按照[16],我們在每個卷積之後、激活之前採用批量歸一化(BN)[16]。我們按照[13]的方法初始化權重,並從零開始訓練所有的簡單/殘差網絡。我們使用批大小爲 256 的 SGD。學習率從 0.1 開始,當誤差進入平臺期時除以 10,模型最多訓練 60×10^4 次迭代。我們使用 0.0001 的權重衰減和 0.9 的動量。遵循[16]的實踐,我們不使用 dropout[14]。
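按 3.4 節的超參數,用 Keras 可以寫出如下的訓練配置草圖(compile_and_train、train_ds、val_ds 等名稱均爲筆者假設;權重衰減在 Keras 中通常用各卷積層的 kernel_regularizer=l2(1e-4) 近似;60×10^4 次迭代、批大小 256 在 128 萬張圖像上約合 120 個 epoch):

```python
import tensorflow as tf

def compile_and_train(model, train_ds, val_ds, epochs=120):
    """3.4 節訓練超參數的草圖:SGD(lr=0.1, momentum=0.9),誤差進入平臺期時學習率除以 10。"""
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    callbacks = [
        # 以驗證損失近似“誤差進入平臺期”,將學習率乘以 0.1(即除以 10)
        tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=5),
    ]
    return model.fit(train_ds, validation_data=val_ds,
                     epochs=epochs, callbacks=callbacks)
```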

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).

在測試階段,爲了進行比較研究,我們採用標準的 10-crop 測試[21]。爲了獲得最佳結果,我們採用如[41, 13]中的全卷積形式,並在多個尺度上對分數取平均(調整圖像大小,使短邊位於 {224, 256, 384, 480, 640} 中)。

@3 作者在本節先講了殘差網絡更好的理論依據:原始函數和殘差函數學習的難易程度是不同的;
然後說明了殘差函數的形式是靈活的,文中使用的是兩層或三層的,並且不採用一層的(類似於線性層);
緊接着通過對比 VGG、34-layer plain 和 34-layer residual,講解了網絡的結構;
最後講解了網絡的實現細節。

4. Experiments

4. 實驗

4.1. ImageNet Classification

4.1. ImageNet 分類

We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

我們在由 1000 個類別組成的 ImageNet 2012 分類數據集[36]上對我們的方法進行了評估。這些模型在 128 萬張訓練圖像上進行訓練,並在 5 萬張驗證圖像上進行評估。我們也獲得了由測試服務器報告的在 10 萬張測試圖像上的最終結果。我們同時評估了 top-1 和 top-5 錯誤率。

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.

簡單網絡。我們首先評估 18 層和 34 層的簡單網絡。34 層簡單網絡在圖 3(中間)。18 層簡單網絡是一種類似的形式。有關詳細的體系結構,請參見表 1。

The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem -- the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.

Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

表 2 中的結果表明,較深的 34 層簡單網絡比較淺的 18 層簡單網絡有更高的驗證誤差。爲了揭示原因,在圖 4(左)中,我們比較了訓練過程中的訓練/驗證誤差。我們觀察到了退化問題——雖然 18 層簡單網絡的解空間是 34 層簡單網絡解空間的子空間,但 34 層簡單網絡在整個訓練過程中具有更高的訓練誤差。

表 2. ImageNet 驗證集上的 Top-1 錯誤率(%,10 個裁剪圖像測試)。相比於對應的簡單網絡,ResNet 沒有額外的參數。圖 4 顯示了訓練過程。

Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.

圖 4. 在 ImageNet 上訓練。細曲線表示訓練誤差,粗曲線表示中心裁剪圖像的驗證誤差。左:18 層和 34 層的簡單網絡。右:18 層和 34 層的 ResNet。在本圖中,殘差網絡與對應的簡單網絡相比沒有額外的參數。

We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error3. The reason for such optimization difficulties will be studied in the future.

Table 3. Error rates (%, 10-crop testing) on ImageNet validation. VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.

我們認爲這種優化困難不太可能是由梯度消失引起的。這些簡單網絡使用 BN[16]訓練,這保證了前向傳播的信號具有非零方差。我們還驗證了反向傳播的梯度在 BN 下表現出健康的範數。因此,前向和反向的信號都沒有消失。實際上,34 層簡單網絡仍能取得有競爭力的準確率(表 3),這表明求解器在某種程度上仍在工作。我們推測深度簡單網絡可能具有指數級低的收斂速度,從而影響了訓練誤差的降低。這種優化困難的原因將在未來研究。

表 3. ImageNet 驗證集上的錯誤率(%,10 個裁剪圖像測試)。VGG-16 基於我們的測試。ResNet-50/101/152 使用選項 B,僅用投影來增加維度。

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.

殘差網絡。接下來我們評估 18 層和 34 層殘差網絡(ResNets)。基準架構與上述的簡單網絡相同,只是如圖 3(右)所示,在每對 3×3 濾波器上添加了快捷連接。在第一次比較(表 2 和圖 4 右)中,我們對所有快捷連接都使用恆等映射,並用零填充來增加維度(選項 A),所以與對應的簡單網絡相比,它們沒有額外的參數。

We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.

我們從表 2 和圖 4 中得到三個主要的觀察結果。首先,殘差學習使情況發生了逆轉——34 層 ResNet 比 18 層 ResNet 更好(好 2.8%)。更重要的是,34 層 ResNet 表現出低得多的訓練誤差,並且可以泛化到驗證數據。這表明在這種情況下,退化問題得到了很好的解決,我們設法從增加的深度中獲得了準確性收益。

Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.

第二,與對應的簡單網絡相比,34 層 ResNet 將 top-1 錯誤率降低了 3.5%(表 2),這得益於成功降低的訓練誤差(圖 4 右 vs. 左)。這種比較證實了殘差學習在極深系統中的有效性。

Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.

最後,我們還注意到 18 層的簡單/殘差網絡的準確率相當(表 2),但 18 層 ResNet 收斂更快(圖 4 右 vs. 左)。當網絡“不過度深”時(這裏是 18 層),目前的 SGD 求解器仍能爲簡單網絡找到好的解。在這種情況下,ResNet 通過在早期提供更快的收斂簡化了優化。

Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.

恆等和投影快捷連接我們已經表明沒有參數恆等快捷連接有助於訓練。接下來我們調查投影快捷連接(方程 2)。在表 3 中我們比較了三個選項:(A) 零填充快捷連接用來增加維度,所有的快捷連接是沒有參數的(與表 2 和圖 4 右相同);(B)投影快捷連接用來增加維度,其它的快捷連接是恆等的;(C)所有的快捷連接都是投影

Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.

表 3 顯示,所有三個選項都比對應的簡單網絡好很多。選項 B 比 A 略好,我們認爲這是因爲 A 中零填充的維度確實沒有進行殘差學習。選項 C 比 B 稍好,我們把這歸因於許多(十三個)投影快捷連接引入的額外參數。但 A/B/C 之間的細微差異表明,投影快捷連接對於解決退化問題不是必需的。因此,在本文的剩餘部分我們不再使用選項 C,以減少內存/時間複雜度和模型大小。恆等快捷連接對於不增加下面介紹的瓶頸結構的複雜度尤爲重要。

Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design. For each residual function F , we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.

更深的瓶頸結構。接下來我們描述用於 ImageNet 的更深的網絡。出於對我們能承受的訓練時間的考慮,我們將構建塊修改爲瓶頸設計。對於每個殘差函數 F,我們使用 3 層堆疊而不是 2 層(圖 5)。這三層分別是 1×1、3×3 和 1×1 卷積,其中 1×1 層負責先減小然後增加(恢復)維度,使 3×3 層成爲具有較小輸入/輸出維度的瓶頸。圖 5 展示了一個示例,其中兩個設計具有相似的時間複雜度。
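圖 5(右)的瓶頸構建塊可以按如下草圖實現(bottleneck_block 爲筆者假設的名稱;這裏把步長放在第一個 1×1 卷積上,不同實現對此處理略有差異):

```python
import tensorflow as tf
from tensorflow.keras import layers

def bottleneck_block(x, filters, stride=1, projection=False):
    """瓶頸構建塊:1x1 降維 -> 3x3 -> 1x1 恢復維度,輸出通道爲 4*filters。"""
    shortcut = x
    if projection:                                  # 選項 B:僅在維度變化時用 1x1 投影
        shortcut = layers.Conv2D(4 * filters, 1, strides=stride, use_bias=False)(x)
        shortcut = layers.BatchNormalization()(shortcut)

    out = layers.Conv2D(filters, 1, strides=stride, use_bias=False)(x)    # 1x1 降維
    out = layers.BatchNormalization()(out)
    out = layers.ReLU()(out)
    out = layers.Conv2D(filters, 3, padding="same", use_bias=False)(out)  # 3x3 瓶頸層
    out = layers.BatchNormalization()(out)
    out = layers.ReLU()(out)
    out = layers.Conv2D(4 * filters, 1, use_bias=False)(out)              # 1x1 恢復維度
    out = layers.BatchNormalization()(out)

    out = layers.Add()([out, shortcut])             # 無參數的恆等快捷連接(或投影)
    return layers.ReLU()(out)
```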

The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity short-cut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.

無參數的恆等快捷連接對於瓶頸架構尤爲重要。如果把圖 5(右)中的恆等快捷連接替換爲投影,可以證明時間複雜度和模型大小都會加倍,因爲快捷連接連接到兩個高維端。因此,恆等快捷連接爲瓶頸設計帶來了更高效的模型。

50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.

50 層 ResNet:我們用 3 層瓶頸塊替換 34 層網絡中的每一個 2 層塊,得到了一個 50ResNet(表 1)。我們使用選項 B 來增加維度。該模型有 38FLOP
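Keras 自帶了 ResNet-50 的參考實現,可以用來對照表 1 中的結構(其實現細節,如快捷連接選項,可能與論文最初的 Caffe 實現略有出入):

```python
import tensorflow as tf

# 不加載預訓練權重,僅用於查看結構
model = tf.keras.applications.ResNet50(weights=None,
                                        input_shape=(224, 224, 3),
                                        classes=1000)
model.summary()                          # 各 stage 的瓶頸塊堆疊數量爲 3、4、6、3
print(f"參數量:{model.count_params():,}")
```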

101-layer and 152-layer ResNet: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).

101 層和 152 層 ResNet:我們通過使用更多的 3 層瓶頸塊來構建 101 層和 152 層 ResNet(表 1)。值得注意的是,儘管深度顯著增加,但 152 層 ResNet(113 億 FLOPs)仍然比 VGG-16/19 網絡(153/196 億 FLOPs)具有更低的複雜度。

The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).

50/101/152ResNet34ResNet準確性要高得多(表 3 和 4)。我們沒有觀察到退化問題,因此可以從顯著增加的深度中獲得顯著的準確性收益。所有評估指標都能證明深度的收益(表 3 和表 4)。

Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.

Table 4. Error rates (%) of single-model results on the ImageNet validation set (except reported on the test set).

Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.

與最先進的方法比較。在表 4 中,我們與以前最好的單模型結果進行比較。我們的基準 34 層 ResNet 已經取得了非常有競爭力的準確率。我們的 152 層 ResNet 的單模型 top-5 驗證錯誤率爲 4.49%。這個單模型結果勝過以前所有的集成結果(表 5)。我們結合了六個不同深度的模型形成一個集成(提交時僅包含兩個 152 層的模型),在測試集上得到了 3.57% 的 top-5 錯誤率(表 5)。這次提交在 ILSVRC 2015 中獲得了第一名。

表 4. 單一模型在 ImageNet 驗證集上的錯誤率(%)(除了†是測試集上報告的錯誤率)。

表 5. 模型綜合的錯誤率(%)。top-5 錯誤率是 ImageNet 測試集上的並由測試服務器報告的。

4.2. CIFAR-10 and Analysis

4.2. CIFAR-10 和分析

We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.

我們對 CIFAR-10 數據集[20]進行了更多的研究,其中包括 10 個類別中的 5 萬張訓練圖像和 1 萬張測試圖像。我們介紹了在訓練集上進行訓練和在測試集上進行評估的實驗。我們的焦點在於極深網絡的行爲,但不是推動最先進的結果,所以我們有意使用如下的簡單架構。

The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:

When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.

簡單/殘差架構遵循圖 3(中/右)的形式。網絡輸入是 32×32 的圖像,每個像素減去均值。第一層是 3×3 卷積。然後我們在大小爲 {32,16,8} 的特徵圖上分別使用了帶有 3×3 卷積的 6n 個堆疊層,每個特徵圖大小使用 2n 層。濾波器數量分別爲 {16,32,64}。下采樣由步長爲 2 的卷積進行。網絡以全局平均池化,一個 10 維全連接層和 softmax 作爲結束。共有 6n+2 個堆疊的加權層。下表總結了這個架構:

當使用快捷連接時,它們連接到成對的 3×3 卷積層上(共 3n 個快捷連接)。在這個數據集上,我們在所有案例中都使用恆等快捷連接(即選項 A),因此我們的殘差模型與對應的簡單模型具有完全相同的深度,寬度和參數數量
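按照上面的描述,可以用如下草圖構建 6n+2 層的 CIFAR-10 ResNet(cifar_resnet 爲筆者假設的名稱;爲簡化起見,下采樣處的快捷連接用 1×1 投影代替論文中的選項 A 零填充):

```python
import tensorflow as tf
from tensorflow.keras import layers

def cifar_resnet(n=3, num_classes=10):
    """6n+2 層的 CIFAR-10 ResNet 草圖:n=3 即 ResNet-20,n=9 即 ResNet-56,n=18 即 ResNet-110。"""
    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = layers.Conv2D(16, 3, padding="same", use_bias=False)(inputs)  # 第一層 3x3 卷積
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    for stage, filters in enumerate([16, 32, 64]):        # 特徵圖尺寸分別爲 {32, 16, 8}
        for block in range(n):                            # 每種特徵圖尺寸 2n 個加權層
            stride = 2 if stage > 0 and block == 0 else 1
            shortcut = x
            if stride != 1:                               # 下采樣時用 1x1 投影匹配維度
                shortcut = layers.Conv2D(filters, 1, strides=stride, use_bias=False)(x)
                shortcut = layers.BatchNormalization()(shortcut)
            y = layers.Conv2D(filters, 3, strides=stride, padding="same", use_bias=False)(x)
            y = layers.BatchNormalization()(y)
            y = layers.ReLU()(y)
            y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
            y = layers.BatchNormalization()(y)
            x = layers.ReLU()(layers.Add()([y, shortcut]))

    x = layers.GlobalAveragePooling2D()(x)                # 全局平均池化
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = cifar_resnet(n=3)                                 # 20 層
```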

We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [13] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.

我們使用 0.0001 的權重衰減和 0.9 的動量,並採用[13]中的權重初始化和 BN[16],但不使用 dropout。這些模型在兩個 GPU 上進行訓練,批大小爲 128。我們從 0.1 的學習率開始,在 32k 和 48k 次迭代時將其除以 10,並在 64k 次迭代時終止訓練,這是由 45k/5k 的訓練/驗證集劃分決定的。我們按照[24]中的簡單數據增強進行訓練:每邊填充 4 個像素,並從填充後的圖像或其水平翻轉中隨機採樣 32×32 的裁剪。測試時,我們只評估原始 32×32 圖像的單一視圖。
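上述訓練設置可以用 Keras 大致寫成如下草圖(augment 等名稱爲筆者假設;權重衰減同樣可用各卷積層的 kernel_regularizer=l2(1e-4) 近似):

```python
import tensorflow as tf

# 學習率:0.1 起步,在 32k 和 48k 次迭代時除以 10(訓練至 64k 次迭代)
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[32000, 48000], values=[0.1, 0.01, 0.001])
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)

def augment(image, label):
    """每邊填充 4 個像素,再隨機裁剪 32x32,並隨機水平翻轉。"""
    image = tf.pad(image, [[4, 4], [4, 4], [0, 0]])
    image = tf.image.random_crop(image, [32, 32, 3])
    image = tf.image.random_flip_left_right(image)
    return image, label

# 批大小 128:train_ds 爲假設已構建的 tf.data.Dataset
# train_ds = train_ds.map(augment).shuffle(50000).batch(128)
```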

We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [42]), suggesting that such an optimization difficulty is a fundamental problem.

Figure 6. Training on CIFAR-10. Dashed lines denote training error, and bold lines denote testing error. Left: plain networks. The error of plain-110 is higher than 60% and not displayed. Middle: ResNets. Right: ResNets with 110 and 1202 layers.

我們比較了 n = {3, 5, 7, 9},得到了 20 層、32 層、44 層和 56 層的網絡。圖 6(左)顯示了簡單網絡的行爲。深度簡單網絡受深度增加的影響,在更深時表現出了更高的訓練誤差。這種現象類似於 ImageNet(圖 4,左)和 MNIST(請看[42])上的現象,表明這種優化困難是一個基本的問題。

圖 6. 在 CIFAR-10 上訓練。虛線表示訓練誤差,粗線表示測試誤差。左:簡單網絡。plain-110 的錯誤率高於 60%,沒有展示。中:ResNet。右:110 層和 1202 層的 ResNet。

Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.

圖 6(中)顯示了 ResNet 的行爲。與 ImageNet 的情況類似(圖 4,右),我們的 ResNet 設法克服優化困難並隨着深度的增加展示了準確性收益。

We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging5. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [35] and Highway [42] (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).

Table 6. Classification error on the CIFAR-10 test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show “best (mean±std)” as in [43].

我們進一步探索 n = 18,得到了 110 層的 ResNet。在這種情況下,我們發現 0.1 的初始學習率略大,無法開始收斂。因此我們用 0.01 的學習率預熱訓練,直到訓練誤差低於 80%(大約 400 次迭代),然後回到 0.1 繼續訓練。學習計劃的剩餘部分與之前相同。這個 110 層網絡收斂得很好(圖 6,中)。它比其它深且窄的網絡(例如 FitNet[35]和 Highway[42],表 6)擁有更少的參數,但其結果仍屬於目前最好的結果之列(6.43%,表 6)。

表 6. 在 CIFAR-10 測試集上的分類誤差。所有方法都使用了數據增強。對於 ResNet-110,我們像[43]中那樣運行了 5 次,並展示了“最好(mean±std)”。
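110 層網絡的學習率預熱可以大致寫成下面的草圖(論文以“訓練誤差低於 80%”爲切換條件,這裏簡化爲按約 400 次迭代切換):

```python
import tensorflow as tf

# 先以 0.01 預熱約 400 次迭代,再回到 0.1,之後沿用 32k/48k 的衰減計劃
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[400, 32000, 48000],
    values=[0.01, 0.1, 0.01, 0.001])
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)
```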

Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.

層響應分析。圖 7 顯示了層響應的標準差(std)。這些響應是每個 3×3 層的輸出,在 BN 之後、其它非線性(ReLU/加法)之前。對於 ResNet,該分析揭示了殘差函數的響應強度。圖 7 顯示,ResNet 的響應通常比其對應的簡單網絡更小。這些結果支持了我們的基本動機(第 3.1 節):殘差函數通常比非殘差函數更接近零。我們還注意到,更深的 ResNet 具有更小的響應幅度,如圖 7 中 ResNet-20、56 和 110 之間的比較所示。當層數更多時,ResNet 的單個層趨向於更少地修改信號。

Figure 7. Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

圖 7. 層響應在 CIFAR-10 上的標準差(std)。這些響應是每個 3×3 層的輸出,在 BN 之後、非線性之前。上面:以原始順序顯示層。下面:響應按降序排列。
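這種分析可以用下面的草圖復現:取各個 BN 層的輸出(即非線性之前的響應)並計算標準差(layer_response_std 爲筆者假設的名稱,model 與 images 爲假設已準備好的 Keras 模型和一批輸入;這裏用“所有 BN 輸出”近似“每個 3×3 層之後的 BN 輸出”):

```python
import numpy as np
import tensorflow as tf

def layer_response_std(model, images):
    """計算各 BN 層輸出(非線性之前)的標準差,近似圖 7 的層響應分析。"""
    bn_outputs = [layer.output for layer in model.layers
                  if isinstance(layer, tf.keras.layers.BatchNormalization)]
    probe = tf.keras.Model(model.input, bn_outputs)   # 一次前向取出所有中間響應
    responses = probe(images, training=False)
    return [float(np.std(r)) for r in responses]
```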

Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 10^3-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).

探索超過 1000 層的網絡。我們探索了一個超過 1000 層的極深模型。我們設置 n = 200,得到 1202 層的網絡,其訓練方式如上所述。我們的方法沒有顯示出優化困難,這個 10^3 層的網絡能夠取得 <0.1% 的訓練誤差(圖 6,右)。其測試誤差仍然相當好(7.93%,表 6)。

But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [10] or dropout [14] is applied to obtain the best results ([10, 25, 24, 35]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.

但是,這種極深的模型仍然存在開放的問題。這個 1202 層網絡的測試結果比我們 110 層網絡的更差,雖然兩者具有類似的訓練誤差。我們認爲這是因爲過擬合。對於這種小型數據集,1202 層網絡可能是不必要的大(19.4M 參數)。在這個數據集上,通常要應用強正則化,如 maxout[10]或者 dropout[14],來獲得最佳結果([10, 25, 24, 35])。在本文中,我們不使用 maxout/dropout,只是簡單地通過設計深且窄的架構來施加正則化,以免分散對優化困難這一焦點的注意力。但結合更強的正則化可能會改善結果,我們將來會研究。

4.3. Object Detection on PASCAL and MS COCO

4.3. 在 PASCAL 和 MS COCO 上的目標檢測

Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [41] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.

Table 7. Object detection mAP (%) on the PASCAL VOC 2007/2012 test sets using baseline Faster R-CNN. See also Table 10 and 11 for better results.

Table 8. Object detection mAP (%) on the COCO validation set using baseline Faster R-CNN. See also Table 9 for better results.

我們的方法在其它識別任務上有很好的泛化性能。表 7 和表 8 顯示了 PASCAL VOC 2007 和 2012[5]以及 COCO[26]上的目標檢測基準結果。我們採用 Faster R-CNN[32]作爲檢測方法。在這裏,我們感興趣的是用 ResNet-101 替換 VGG-16[41]所帶來的改進。使用這兩種模型的檢測實現(見附錄)是一樣的,所以收益只能歸因於更好的網絡。最顯著的是,在有挑戰性的 COCO 數據集上,我們在 COCO 的標準度量指標(mAP@[.5, .95])上獲得了 6.0% 的提升,相對提升了 28%。這種收益完全來自學到的表示。

表 7. 在 PASCAL VOC 2007/2012 測試集上使用基準 Faster R-CNN 的目標檢測 mAP(%)。更好的結果請看表 10 和表 11。

表 8. 在 COCO 驗證集上使用基準 Faster R-CNN 的目標檢測 mAP(%)。更好的結果請看表 9。

Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.

基於深度殘差網絡,我們在 ILSVRC & COCO 2015 競賽的幾個任務中獲得了第一名,分別是:ImageNet 檢測、ImageNet 定位、COCO 檢測和 COCO 分割。更多細節請看附錄。

@3 作者通過實驗(ImageNet 分類、CIFAR-10 和分析)證實了:

  • 深度簡單網絡可能有指數級低收斂特性
  • 投影快捷連接對於解決退化問題不是至關重要的
  • ResNet響應比其對應的簡單網絡的響應更小,即殘差函數通常具有比非殘差函數更接近零

同時也探索了超過 1000 層的網絡,並指出這種極深的模型仍然存在開放的問題。
總的來說,ResNet 在各個數據集、各項任務中表現都較好,可以用來替換一些舊的 backbone。

References

  • [1] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
  • [2] C. M. Bishop. Neural networks for pattern recognition. Oxford university press, 1995.
  • [3] W. L. Briggs, S. F. McCormick, et al. A Multigrid Tutorial. Siam, 2000.
  • [4] K. Chatfield, V. Lempitsky, A. Vedaldi, and A. Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In BMVC, 2011.
  • [5] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) Challenge. IJCV, pages 303–338, 2010.
  • [6] S. Gidaris and N. Komodakis. Object detection via a multi-region & semantic segmentation-aware cnn model. In ICCV, 2015.
  • [7] R. Girshick. Fast R-CNN. In ICCV, 2015.
  • [8] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
  • [9] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
  • [10] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. arXiv:1302.4389, 2013.
  • [11] K. He and J. Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015.
  • [14] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
  • [15] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • [16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
  • [17] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. TPAMI, 33, 2011.
  • [18] H. Jegou, F. Perronnin, M. Douze, J. Sanchez, P. Perez, and C. Schmid. Aggregating local image descriptors into compact codes. TPAMI, 2012.
  • [19] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
  • [20] A. Krizhevsky. Learning multiple layers of features from tiny images. Tech Report, 2009.
  • [21] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [22] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989.
  • [23] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–50. Springer, 1998.
  • [24] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. arXiv:1409.5185, 2014.
  • [25] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv:1312.4400, 2013.
  • [26] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
  • [27] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  • [28] G. Montúfar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In NIPS, 2014.
  • [29] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.
  • [30] F. Perronnin and C. Dance. Fisher kernels on visual vocabularies for image categorization. In CVPR, 2007.
  • [31] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In AISTATS, 2012.
  • [32] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  • [33] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. arXiv:1504.06066, 2015.
  • [34] B. D. Ripley. Pattern recognition and neural networks. Cambridge university press, 1996.
  • [35] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015.
  • [36] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. arXiv:1409.0575, 2014.
  • [37] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, 2013.
  • [38] N. N. Schraudolph. Accelerated gradient descent by factor-centering decomposition. Technical report, 1998.
  • [39] N. N. Schraudolph. Centering neural network gradient factors. In Neural Networks: Tricks of the Trade, pages 207–226. Springer, 1998.
  • [40] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
  • [41] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [42] R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. arXiv:1505.00387, 2015.
  • [43] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. arXiv:1507.06228, 2015.
  • [44] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
  • [45] R. Szeliski. Fast surface interpolation using hierarchical basis functions. TPAMI, 1990.
  • [46] R. Szeliski. Locally adapted hierarchical basis preconditioning. In SIGGRAPH, 2006.
  • [47] T. Vatanen, T. Raiko, H. Valpola, and Y. LeCun. Pushing stochastic gradient towards second-order methods–backpropagation learning with transformations in nonlinearities. In Neural Information Processing, 2013.
  • [48] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms, 2008.
  • [49] W. Venables and B. Ripley. Modern applied statistics with s-plus. 1999.
  • [50] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional neural networks. In ECCV, 2014.