DenseNet Source Code Reading Notes (gluoncv version)


I've been reading the gluoncv version of DenseNet over the past couple of days. Here is a bit of analysis and some notes for the record.

This version of DenseNet follows the overall structure of the paper, but a few small details differ. (I found this by comparing the code against the table in the paper.)

First, a figure to illustrate the overall DenseNet architecture:
[Figure: DenseNet network architecture (the layer-configuration table from the paper)]

And another figure showing DenseNet's most distinctive structure:
[Figure: Dense Block]

Before diving in, a few key points:

  1. In the first figure, everything from the "Dense Block (1)" row onward alternates between Dense Blocks and Transition Layers, so we can write Dense Block and Transition Layer as two basic modules and stitch them together in a loop. The corresponding code:
# Dense Block part
# imports needed by the snippets in this post
from mxnet.gluon import nn
from mxnet.gluon.contrib.nn import HybridConcurrent, Identity

def _make_dense_layer(growth_rate, bn_size, dropout, norm_layer, norm_kwargs):
    '''
        2 * [BN - relu - Conv]
        growth_rate is the number of feature maps each layer in a dense block outputs
    '''
    new_features = nn.HybridSequential(prefix='')
    # ----------------------
    new_features.add(norm_layer(**({} if norm_kwargs is None else norm_kwargs))) # which norm layer this is depends on the arguments passed in
    new_features.add(nn.Activation('relu'))
    new_features.add(nn.Conv2D(bn_size * growth_rate, kernel_size=1, use_bias=False)) # size: 1*1
    # ----------------------
    new_features.add(norm_layer(**({} if norm_kwargs is None else norm_kwargs))) # same: depends on the arguments passed in
    new_features.add(nn.Activation('relu'))
    new_features.add(nn.Conv2D(growth_rate, kernel_size=3, padding=1, use_bias=False)) # size: 3*3
    # ----------------------
    if dropout:
        new_features.add(nn.Dropout(dropout))

    out = HybridConcurrent(axis=1, prefix='')
    out.add(Identity())
    out.add(new_features)

    return out

# Transition Layer part
def _make_transition(num_output_features, norm_layer, norm_kwargs):
    '''
        BN -> Relu -> Conv -> AvgPool
    '''
    out = nn.HybridSequential(prefix='')
    out.add(norm_layer(**({} if norm_kwargs is None else norm_kwargs))) # which norm layer this is depends on the arguments passed in
    out.add(nn.Activation('relu'))
    out.add(nn.Conv2D(num_output_features, kernel_size=1, use_bias=False))  # Size: 1*1
    out.add(nn.AvgPool2D(pool_size=2, strides=2))
    return out
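
A quick way to see what these two modules do to a tensor (a minimal sketch, assuming mxnet is installed and the two functions above are defined):

import mxnet as mx
from mxnet.gluon import nn

layer = _make_dense_layer(growth_rate=32, bn_size=4, dropout=0,
                          norm_layer=nn.BatchNorm, norm_kwargs=None)
layer.initialize()
x = mx.nd.random.uniform(shape=(1, 64, 56, 56))
y = layer(x)
print(y.shape)        # (1, 96, 56, 56): the input's 64 channels + growth_rate new ones

trans = _make_transition(48, norm_layer=nn.BatchNorm, norm_kwargs=None)
trans.initialize()
print(trans(y).shape) # (1, 48, 28, 28): channels compressed, spatial size halved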
  2. In each Dense Block in the second figure, the most striking feature is that every node has a concat connection to every later node (as the formula below shows); this is where DenseNet's charm lies.
    x_l = H_l([x_0, x_1, ..., x_{l-1}])
    At first glance the function _make_dense_layer doesn't seem to implement this... but in fact this feature lives in the last few lines of that function:
    out = HybridConcurrent(axis=1, prefix='')
    out.add(Identity())
    out.add(new_features)

Here is the official documentation's description of HybridConcurrent: "This block feeds its input to all children blocks, and produce the output by concatenating all the children blocks' outputs on the specified axis."

And the source of this class:

class HybridConcurrent(HybridSequential):
    """Lays `HybridBlock` s concurrently.

    This block feeds its input to all children blocks, and
    produce the output by concatenating all the children blocks' outputs
    on the specified axis.

    Example::

        net = HybridConcurrent()
        # use net's name_scope to give children blocks appropriate names.
        with net.name_scope():
            net.add(nn.Dense(10, activation='relu'))
            net.add(nn.Dense(20))
            net.add(Identity())

    Parameters
    ----------
    axis : int, default -1
        The axis on which to concatenate the outputs.
    """
    def __init__(self, axis=-1, prefix=None, params=None):
        super(HybridConcurrent, self).__init__(prefix=prefix, params=params)
        self.axis = axis

    def hybrid_forward(self, F, x):
        out = []
        for block in self._children.values():
            out.append(block(x))
        out = F.concat(*out, dim=self.axis)
        return out

Roughly, it feeds the input to every child block and concatenates their outputs along the specified axis. Yes, that is exactly the concat operation inside the Dense Block, and to me this is the single most important place in the code. As for Identity(), which so often pairs with HybridConcurrent: it is a block that simply returns its input unchanged, so the output here is concat(x, new_features(x)) along the channel axis, which is precisely how each layer's input gets carried forward and reused.
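
To make this concrete, here is a minimal sketch of HybridConcurrent plus Identity on their own (assuming mxnet is installed):

import mxnet as mx
from mxnet.gluon import nn
from mxnet.gluon.contrib.nn import HybridConcurrent, Identity

out = HybridConcurrent(axis=1)
out.add(Identity())                                    # branch 1: the input itself
out.add(nn.Conv2D(16, kernel_size=1, use_bias=False))  # branch 2: new features
out.initialize()

x = mx.nd.random.uniform(shape=(1, 8, 4, 4))
print(out(x).shape)   # (1, 24, 4, 4): 8 original channels + 16 new ones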

Once these two points are understood, the rest of the code poses no real problem: for a first understanding of DenseNet, splitting it into these two modules is enough. Now let's look at the remaining components.

Analysis of the densenet functions

We covered two functions above, _make_dense_layer and _make_transition. There is one more, _make_dense_block, which is really just a wrapper that calls _make_dense_layer in a loop and uses out.name_scope() to give every layer a name prefix, which makes print output easier to read:

def _make_dense_block(num_layers, bn_size, growth_rate, dropout, stage_index,
                      norm_layer, norm_kwargs):
    out = nn.HybridSequential(prefix='stage%d_'%stage_index)
    with out.name_scope():
        for _ in range(num_layers):
            out.add(_make_dense_layer(growth_rate, bn_size, dropout, norm_layer, norm_kwargs))
    return out

All the network construction that follows calls this function.
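
For instance (a sketch), printing a tiny two-layer block shows the prefix at work:

from mxnet.gluon import nn

block = _make_dense_block(num_layers=2, bn_size=4, growth_rate=32, dropout=0,
                          stage_index=1, norm_layer=nn.BatchNorm, norm_kwargs=None)
print(block)   # every child layer's name carries the 'stage1_' prefix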

Now let's look at the DenseNet class. It assembles the most basic DenseNet structure, piecing together the components from the table, and is in turn called by the function get_densenet, which handles some details (such as CPU vs. GPU context, and which variant to pick: 121/161/169/201, etc.).

# Net
class DenseNet(HybridBlock):
    r"""Densenet-BC model from the
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_ paper.

    Parameters
    ----------
    num_init_features : int
        Number of filters to learn in the first convolution layer.
    growth_rate : int
        Number of filters to add each layer (`k` in the paper), i.e. the number of feature maps each layer outputs.
    block_config : list of int
        List of integers for numbers of layers in each pooling block.
    bn_size : int, default 4
        Multiplicative factor for number of bottle neck layers.
        (i.e. bn_size * k features in the bottleneck layer)
    dropout : float, default 0
        Rate of dropout after each dense layer.
    classes : int, default 1000
        Number of classification classes.
    norm_layer : object
        Normalization layer used (default: :class:`mxnet.gluon.nn.BatchNorm`)
        Can be :class:`mxnet.gluon.nn.BatchNorm` or :class:`mxnet.gluon.contrib.nn.SyncBatchNorm`.
    norm_kwargs : dict
        Additional `norm_layer` arguments, for example `num_devices=4`
        for :class:`mxnet.gluon.contrib.nn.SyncBatchNorm`.
    """
    def __init__(self, num_init_features, growth_rate, block_config,
                 bn_size=4, dropout=0, classes=1000,
                 norm_layer=BatchNorm, norm_kwargs=None, **kwargs):
        super(DenseNet, self).__init__(**kwargs)
        with self.name_scope():
            # first the stem: a 7*7 Conv followed by a 3*3 Max pool
            self.features = nn.HybridSequential(prefix='') # prefix sets the name prefix for every layer in this HybridSequential
            self.features.add(nn.Conv2D(num_init_features, kernel_size=7,
                                        strides=2, padding=3, use_bias=False))
            self.features.add(norm_layer(**({} if norm_kwargs is None else norm_kwargs)))
            self.features.add(nn.Activation('relu'))
            self.features.add(nn.MaxPool2D(pool_size=3, strides=2, padding=1))
            # add the dense blocks in a for loop
            num_features = num_init_features
            # e.g. block_config: [6, 12, 24, 16]
            for i, num_layers in enumerate(block_config):
                self.features.add(_make_dense_block(
                    # i+1 is stage_index, used only to set the layers' name prefix
                    num_layers, bn_size, growth_rate, dropout, i+1, norm_layer, norm_kwargs))
                num_features = num_features + num_layers * growth_rate
                if i != len(block_config) - 1:
                    # add a transition layer
                    self.features.add(_make_transition(num_features // 2, norm_layer, norm_kwargs))
                    num_features = num_features // 2 # num_features is halved after every transition layer
            self.features.add(norm_layer(**({} if norm_kwargs is None else norm_kwargs)))
            self.features.add(nn.Activation('relu'))
            self.features.add(nn.AvgPool2D(pool_size=7))
            self.features.add(nn.Flatten())

            self.output = nn.Dense(classes) # the fully-connected classifier

    def hybrid_forward(self, F, x):
        x = self.features(x)
        x = self.output(x)
        return x
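
An end-to-end shape check (a sketch; assumes the class above is in scope, e.g. via from gluoncv.model_zoo.densenet import DenseNet):

import mxnet as mx
from gluoncv.model_zoo.densenet import DenseNet

net = DenseNet(num_init_features=64, growth_rate=32, block_config=[6, 12, 24, 16])  # DenseNet-121
net.initialize()
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
print(net(x).shape)   # (1, 1000)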

I think block_config deserves a note.
Elsewhere in the file there is this piece of code:

# Specification
densenet_spec = {121: (64, 32, [6, 12, 24, 16]),    # [6, 12, 24, 16] is the number of 1*1 + 3*3 composite units in each Dense Block; see the table
                 161: (96, 48, [6, 12, 36, 24]),
                 169: (64, 32, [6, 12, 32, 32]),
                 201: (64, 32, [6, 12, 48, 32])}
...
...
...
num_init_features, growth_rate, block_config = densenet_spec[num_layers]

This is simply the configuration of the four DenseNet variants from the table: num_layers (121/161/169/201, etc.) selects an entry, yielding num_init_features, growth_rate, and block_config, which are then passed to the DenseNet class.
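
To see how num_features evolves, here is the DenseNet-121 channel bookkeeping worked out (a sketch mirroring the loop in DenseNet.__init__):

num_features = 64                       # num_init_features
for i, num_layers in enumerate([6, 12, 24, 16]):
    num_features += num_layers * 32     # growth_rate = 32
    print('after block %d: %d' % (i + 1, num_features))
    if i != 3:                          # no transition after the last block
        num_features //= 2              # the transition layer halves the channels
        print('after transition %d: %d' % (i + 1, num_features))
# prints 256 -> 128 -> 512 -> 256 -> 1024 -> 512 -> 1024, matching the table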

Next, the code that calls the DenseNet class:

def get_densenet(num_layers, pretrained=False, ctx=cpu(),
                 root='~/.mxnet/models', **kwargs):
    r"""Densenet-BC model from the
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_ paper.

    Parameters
    ----------
    num_layers : int
        Number of layers for the variant of densenet. Options are 121, 161, 169, 201.
    pretrained : bool or str
        Boolean value controls whether to load the default pretrained weights for model.
        String value represents the hashtag for a certain version of pretrained weights.
    ctx : Context, default CPU
        The context in which to load the pretrained weights.
    root : str, default $MXNET_HOME/models
        Location for keeping the model parameters.
    norm_layer : object
        Normalization layer used (default: :class:`mxnet.gluon.nn.BatchNorm`)
        Can be :class:`mxnet.gluon.nn.BatchNorm` or :class:`mxnet.gluon.contrib.nn.SyncBatchNorm`.
    norm_kwargs : dict
        Additional `norm_layer` arguments, for example `num_devices=4`
        for :class:`mxnet.gluon.contrib.nn.SyncBatchNorm`.
    """
    num_init_features, growth_rate, block_config = densenet_spec[num_layers]    # get the spec of the requested variant
    net = DenseNet(num_init_features, growth_rate, block_config, **kwargs)
    if pretrained:  # whether to load pretrained weights
        from .model_store import get_model_file
        net.load_parameters(get_model_file('densenet%d'%(num_layers),
                                           tag=pretrained, root=root), ctx=ctx)
        from ..data import ImageNet1kAttr
        attrib = ImageNet1kAttr()
        net.synset = attrib.synset
        net.classes = attrib.classes
        net.classes_long = attrib.classes_long
    return net

Honestly, I think this function should expose a classes parameter explicitly... when we train on our own data, classes is not necessarily 1000... oh right, it can be passed through **kwargs.
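
For example (a sketch), the number of classes can indeed be forwarded through **kwargs:

net = get_densenet(121, classes=10)   # becomes DenseNet(..., classes=10) under the hood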

Once this is understood, all that remains are the trivial functions that call get_densenet for the different DenseNet variants.
For example:

def densenet161(**kwargs):
    return get_densenet(161, **kwargs)
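
A typical usage sketch (assumes gluoncv is installed; pretrained=True downloads the weights on first use):

import mxnet as mx
from gluoncv.model_zoo import densenet161

net = densenet161(pretrained=True)
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))   # stand-in for a preprocessed image
prob = net(x).softmax()
print(net.classes[int(prob.argmax(axis=1).asscalar())])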

For the complete code, please head over to GitHub. I find gluoncv's code quite readable overall; if anything here is wrong, corrections are welcome.
