A PyTorch Tutorial on Fine-tuning Pretrained Models

In deep learning, many application sub-domains, such as animal or food recognition, only have publicly available datasets that are far smaller than ImageNet and similar large-scale databases. Training a deep network from scratch on them is impractical and easily leads to overfitting. In that case we take a model that has already been trained on a large-scale database and fine-tune it directly on the target dataset. For the target dataset, the pretrained model is essentially just a good parameter initialization; especially when the large dataset is similar in nature to the target dataset, fine-tuning on the target dataset can give good results.

Steps to fine-tune a pretrained network

1. First, change the number of output units of the pretrained model's final fully connected (classification) layer, because the number of classes in the target dataset usually differs from that of the large-scale database. Set it to the number of classes in the target training set; if the two happen to match, no change is needed.

2. Freeze the parameters of all layers before the classifier so that they do not participate in learning (no backpropagation), and train only the classification layer. At this stage the learning rate can be set relatively large, e.g. 10x (or a few times) the original initial learning rate, or simply 0.01. Training is fast at this stage because only the classification layer needs backpropagation; it is worth trying several learning-rate settings.

3. Then set a relatively small learning rate and train the whole network; training becomes slower at this stage. A minimal end-to-end sketch of these steps is shown below.
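For concreteness, here is a minimal sketch of steps 1-3 using a torchvision ResNet-18 as the pretrained backbone (the choice of model, the number of classes and the learning rates are illustrative assumptions, not taken from the text above):

import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 10  # assumed number of classes in the target dataset

# Step 1: replace the classification layer to match the target dataset
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Step 2: freeze everything except the new classifier, train it with a larger lr
for name, p in model.named_parameters():
    p.requires_grad = name.startswith('fc')
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()),
                            lr=0.01, momentum=0.9)
# ... train the classifier for a few epochs ...

# Step 3: unfreeze all layers, continue training the whole network with a smaller lr
for p in model.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# ... train all layers together ...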

The rest of this post explains in detail the pieces involved in fine-tuning a pretrained network with PyTorch: freezing learnable parameters, and setting different learning rates for different layers.

1. Freezing the learnable parameters of certain layers in PyTorch:

import torch.nn as nn

class Net(nn.Module):

    def __init__(self, num_classes=546):
        super(Net, self).__init__()
        self.features = nn.Sequential(

            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),

            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),

        )

        self.Conv1_1 = nn.Sequential(

            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),

            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
        )

        # freeze all parameters defined so far (i.e. self.features and self.Conv1_1)
        for p in self.parameters():
            p.requires_grad = False

        self.Conv1_2 = nn.Sequential(

            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),

            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),

        )

With the code above, the parameters of the self.features and self.Conv1_1 layers of the Net model are frozen and not learnable. The key is where the snippet

for p in self.parameters():
    p.requires_grad=False

is inserted: the parameters of all layers defined before it are frozen, so no gradients are backpropagated through them. You can also freeze the parameters of one specific layer, as follows:

for p in self.features.parameters():
    p.requires_grad=False

With this, all parameters of the self.features layer are frozen.
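As a quick sanity check of which parameters actually ended up frozen, something like the following can be used (a minimal sketch, assuming the Net class defined above):

model = Net()
frozen = sum(1 for p in model.parameters() if not p.requires_grad)
trainable = sum(1 for p in model.parameters() if p.requires_grad)
print('frozen parameter tensors:   ', frozen)
print('trainable parameter tensors:', trainable)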

Note that for the settings above to actually take effect during training, the optimizer needs to be constructed so that only the trainable parameters are passed to it:

optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()),
                            args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay)

2. Setting different learning rates for different layers in PyTorch

model = Net()
conv1_2_params = list(map(id, model.Conv1_2.parameters()))
base_params = filter(lambda p: id(p) not in conv1_2_params,
                     model.parameters())
optimizer = torch.optim.SGD([
                {'params': base_params},
                {'params': model.Conv1_2.parameters(), 'lr': 10 * args.lr}
            ], args.lr, momentum=args.momentum, weight_decay=args.weight_decay)

The code above sets the learning rate of the self.Conv1_2 layer of the Net model to 10 times the learning rate that is passed in; base_params has no learning rate set explicitly, so it defaults to the passed-in args.lr. Note that

[{'params': base_params}, {'params': model.Conv1_2.parameters(), 'lr': 10 * args.lr}]

is a list of dicts, one dict per parameter group.
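After the optimizer is constructed, optimizer.param_groups contains one entry per dict in that list, which is what the learning-rate adjustment further below relies on. A minimal sketch, assuming the optimizer built above:

# param_groups[0] corresponds to {'params': base_params}  (lr = args.lr)
# param_groups[1] corresponds to the Conv1_2 group         (lr = 10 * args.lr)
for i, group in enumerate(optimizer.param_groups):
    print(i, group['lr'], len(group['params']))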

Setting different learning rates this way is not very flexible. For a more flexible per-layer schedule, the learning rates can be set inside an adjust_learning_rate function, like this:

def adjust_learning_rate(optimizer, epoch, args):
    # piecewise-constant schedule: 10 epochs each at 0.01, 0.005 and 0.0025
    lre = []
    lre.extend([0.01] * 10)
    lre.extend([0.005] * 10)
    lre.extend([0.0025] * 10)
    lr = lre[epoch]
    optimizer.param_groups[0]['lr'] = 0.9 * lr
    optimizer.param_groups[1]['lr'] = 10 * lr
    print(optimizer.param_groups[0]['lr'])
    print(optimizer.param_groups[1]['lr'])

In the code above, optimizer.param_groups[0] refers to the {'params': base_params} entry of [{'params': base_params}, {'params': model.Conv1_2.parameters(), 'lr': 10 * args.lr}], and optimizer.param_groups[1] refers to {'params': model.Conv1_2.parameters(), 'lr': 10 * args.lr}. The learning rates set here overwrite args.lr; in my view this approach gives more flexibility when scheduling learning rates. The same function can also be implemented as follows (note that the learning rates here are chosen arbitrarily and do not match the code above):

import numpy as np

def adjust_learning_rate(optimizer, epoch, args):
    # log-spaced schedule from 1e-2 down to 1e-4 over 40 epochs
    lre = np.logspace(-2, -4, 40)
    lr = lre[epoch]
    for i in range(len(optimizer.param_groups)):
        param_group = optimizer.param_groups[i]
        if i == 0:
            param_group['lr'] = 0.9 * lr
        else:
            param_group['lr'] = 10 * lr
        print(param_group['lr'])
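A typical way to use this function is to call it once at the start of each epoch in the training loop (a minimal sketch; train_one_epoch is a hypothetical helper standing in for the actual training code):

for epoch in range(40):
    adjust_learning_rate(optimizer, epoch, args)
    train_one_epoch(model, train_loader, optimizer)  # hypothetical training step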

Below is the PyTorch implementation of the SGD optimizer, with each of its arguments and its meaning:

import torch
from .optimizer import Optimizer, required


class SGD(Optimizer):
    r"""Implements stochastic gradient descent (optionally with momentum).

    Nesterov momentum is based on the formula from
    `On the importance of initialization and momentum in deep learning`__.

    Args:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        lr (float): learning rate
        momentum (float, optional): momentum factor (default: 0)
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
        dampening (float, optional): dampening for momentum (default: 0)
        nesterov (bool, optional): enables Nesterov momentum (default: False)

    Example:
        >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
        >>> optimizer.zero_grad()
        >>> loss_fn(model(input), target).backward()
        >>> optimizer.step()

    __ http://www.cs.toronto.edu/%7Ehinton/absps/momentum.pdf

    .. note::
        The implementation of SGD with Momentum/Nesterov subtly differs from
        Sutskever et. al. and implementations in some other frameworks.

        Considering the specific case of Momentum, the update can be written as

        .. math::
                  v = \rho * v + g \\
                  p = p - lr * v

        where p, g, v and :math:`\rho` denote the parameters, gradient,
        velocity, and momentum respectively.

        This is in contrast to Sutskever et. al. and
        other frameworks which employ an update of the form

        .. math::
             v = \rho * v + lr * g \\
             p = p - v

        The Nesterov version is analogously modified.
    """

    def __init__(self, params, lr=required, momentum=0, dampening=0,
                 weight_decay=0, nesterov=False):
        if lr is not required and lr < 0.0:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if momentum < 0.0:
            raise ValueError("Invalid momentum value: {}".format(momentum))
        if weight_decay < 0.0:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))

        defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
                        weight_decay=weight_decay, nesterov=nesterov)
        if nesterov and (momentum <= 0 or dampening != 0):
            raise ValueError("Nesterov momentum requires a momentum and zero dampening")
        super(SGD, self).__init__(params, defaults)

    def __setstate__(self, state):
        super(SGD, self).__setstate__(state)
        for group in self.param_groups:
            group.setdefault('nesterov', False)

    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            weight_decay = group['weight_decay']
            momentum = group['momentum']
            dampening = group['dampening']
            nesterov = group['nesterov']

            for p in group['params']:
                if p.grad is None:
                    continue
                d_p = p.grad.data
                if weight_decay != 0:
                    d_p.add_(weight_decay, p.data)
                if momentum != 0:
                    param_state = self.state[p]
                    if 'momentum_buffer' not in param_state:
                        buf = param_state['momentum_buffer'] = torch.zeros_like(p.data)
                        buf.mul_(momentum).add_(d_p)
                    else:
                        buf = param_state['momentum_buffer']
                        buf.mul_(momentum).add_(1 - dampening, d_p)
                    if nesterov:
                        d_p = d_p.add(momentum, buf)
                    else:
                        d_p = buf

                p.data.add_(-group['lr'], d_p)

        return loss

Lessons learned: when fine-tuning, it is best not to freeze and unfreeze parameters in alternating layers; in my experience this generally does not work well. The usual recipe is enough: first fine-tune the classification layer with a relatively large learning rate, then set a smaller learning rate and train all layers of the network together. Alternatively, you can skip the classifier-only stage and train all layers together from the start, with only the classification layer given a relatively larger learning rate; that also works, though I have not evaluated which of the two gives better results. When using a triplet loss to fine-tune a network that was trained with a softmax loss, setting a small, step-decayed learning rate and training all layers together works well, without first fine-tuning the layer that feeds the classifier. A sketch of such a step-decayed schedule follows.
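A minimal sketch of a step-decayed schedule using torch.optim.lr_scheduler.StepLR; the base learning rate, step size and decay factor are illustrative assumptions, and the triplet-loss training code itself is omitted:

import torch

# assumes `model` is the softmax-pretrained network, now fine-tuned with a triplet loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)  # halve lr every 10 epochs

for epoch in range(30):
    # ... one epoch of triplet-loss training over all layers ...
    scheduler.step()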

