PyTorch's `torch.load` and `load_state_dict` are fairly rigid ways of reading in a parameter file: they require the keys of the loaded state_dict to match the keys of `Model.state_dict()` exactly.
In transfer learning, however, we often want to use only part of a pretrained network, stitch several networks into one, or split apart a pretrained model's `nn.Sequential` to get at intermediate-layer outputs. In those situations the standard loading approach is no longer effective.
For example, if we want to take the first 7 convolution blocks of MobileNet, freeze them, and attach a different structure behind them (or rewrite the model as an FCN), the conventional method fails.
The most general approach is to build a dictionary whose keys match those of the network we defined ourselves, then fill in the desired parameters from whichever pretrained networks we like under those new keys. That gives a new state_dict that we can load directly. So far this is the only method I can think of that handles arbitrarily complex network changes.
Searching online for "loading part of a model" or "freezing part of a model" mostly turns up examples that just swap out the FC layer, which did not help at all. As a beginner I hit a few pitfalls writing state_dicts by hand, so I am posting this as a record.
1. Loading part of the pretrained parameters
Let's first look at MobileNet's structure.
(Source: GitHub, which also provides the pretrained model mobilenet_sgd_rmsprop_69.526.tar)
```python
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        def conv_bn(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True)
            )

        def conv_dw(inp, oup, stride):
            return nn.Sequential(
                # depthwise 3x3 convolution
                nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                nn.ReLU(inplace=True),

                # pointwise 1x1 convolution
                nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True),
            )

        self.model = nn.Sequential(
            conv_bn(   3,   32, 2),
            conv_dw(  32,   64, 1),
            conv_dw(  64,  128, 2),
            conv_dw( 128,  128, 1),
            conv_dw( 128,  256, 2),
            conv_dw( 256,  256, 1),
            conv_dw( 256,  512, 2),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1),
            conv_dw( 512, 1024, 2),
            conv_dw(1024, 1024, 1),
            nn.AvgPool2d(7),
        )
        self.fc = nn.Linear(1024, 1000)

    def forward(self, x):
        x = self.model(x)
        x = x.view(-1, 1024)
        x = self.fc(x)
        return x
```
We only need the first 7 conv blocks, and to make later concat operations convenient we break the Sequential apart, giving the following:
```python
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        def conv_bn(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True)
            )

        def conv_dw(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                nn.ReLU(inplace=True),

                nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True),
            )

        self.conv1 = conv_bn(  3,  32, 2)
        self.conv2 = conv_dw( 32,  64, 1)
        self.conv3 = conv_dw( 64, 128, 2)
        self.conv4 = conv_dw(128, 128, 1)
        self.conv5 = conv_dw(128, 256, 2)
        self.conv6 = conv_dw(256, 256, 1)
        self.conv7 = conv_dw(256, 512, 2)

        # The original layers below are no longer needed;
        # you can attach your own structure here instead.
        '''
        self.features = nn.Sequential(
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 1024, 2),
            conv_dw(1024, 1024, 1),
            nn.AvgPool2d(7),)

        self.fc = nn.Linear(1024, 1000)
        '''

    def forward(self, x):
        x1 = self.conv1(x)
        x2 = self.conv2(x1)
        x3 = self.conv3(x2)
        x4 = self.conv4(x3)
        x5 = self.conv5(x4)
        x6 = self.conv6(x5)
        x7 = self.conv7(x6)
        # x8 = self.features(x7)
        # out = self.fc
        return (x1, x2, x3, x4, x5, x6, x7)
```
Now we create a net from the modified structure and see how its state_dict differs from the state_dict in the pretrained file.
```python
net = Net()
# My machine has no GPU, and the checkpoint's parameters are CUDA tensors
# from GPU training, so they have to be mapped to CPU storage like this:
dict_trained = torch.load("mobilenet_sgd_rmsprop_69.526.tar", map_location=lambda storage, loc: storage)["state_dict"]
dict_new = net.state_dict().copy()

new_list = list(net.state_dict().keys())
trained_list = list(dict_trained.keys())
print("new_state_dict size: {} trained state_dict size: {}".format(len(new_list), len(trained_list)))
print("New state_dict first 10th parameters names")
print(new_list[:10])
print("trained state_dict first 10th parameters names")
print(trained_list[:10])

print(type(dict_new))
print(type(dict_trained))
```
The output is as follows. After truncating roughly half the network, the parameter count drops from 137 to 65. The first ten entries show that the names changed but the order did not. A state_dict is an OrderedDict, so it can be manipulated with the usual dict operations.
```
new_state_dict size: 65 trained state_dict size: 137
New state_dict first 10th parameters names
['conv1.0.weight', 'conv1.1.weight', 'conv1.1.bias', 'conv1.1.running_mean', 'conv1.1.running_var', 'conv2.0.weight', 'conv2.1.weight', 'conv2.1.bias', 'conv2.1.running_mean', 'conv2.1.running_var']
trained state_dict first 10th parameters names
['module.model.0.0.weight', 'module.model.0.1.weight', 'module.model.0.1.bias', 'module.model.0.1.running_mean', 'module.model.0.1.running_var', 'module.model.1.0.weight', 'module.model.1.1.weight', 'module.model.1.1.bias', 'module.model.1.1.running_mean', 'module.model.1.1.running_var']
<class 'collections.OrderedDict'>
<class 'collections.OrderedDict'>
```
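Since a state_dict is a `collections.OrderedDict`, it supports all the usual dict operations while preserving insertion order. A tiny sketch (the string values stand in for real tensors):

```python
from collections import OrderedDict

# A toy stand-in for a state_dict.
sd = OrderedDict([("conv1.weight", 1), ("conv1.bias", 2)])

sd["fc.weight"] = 3          # add an entry, appended at the end
# Rename keys by rebuilding the dict; order is preserved.
renamed = OrderedDict((k.replace("conv1", "layer1"), v) for k, v in sd.items())
print(list(renamed.keys()))  # ['layer1.weight', 'layer1.bias', 'fc.weight']
```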
This confirms the approach: build a dictionary whose keys match the network we defined ourselves, fill in the desired parameters from the pretrained network under those new keys, and we have a new state_dict ready to load. This is the most general method and works for any change to the network.
```python
for i in range(65):
    dict_new[new_list[i]] = dict_trained[trained_list[i]]

net.load_state_dict(dict_new)
```
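The index-based copy above relies on the two key lists being in the same order. Since the trained keys differ from ours only by the `module.model.` prefix (added by `nn.DataParallel`) and the sequential layer index, a name-based remap is also possible. A minimal sketch of the idea, with plain strings standing in for real tensors:

```python
# Hypothetical trained keys, shaped like an nn.DataParallel checkpoint.
trained = {
    "module.model.0.0.weight": "w0",
    "module.model.0.1.bias": "b0",
    "module.model.1.0.weight": "w1",
}

def remap_key(key):
    parts = key.split(".")      # ['module', 'model', '0', '0', 'weight']
    idx = int(parts[2])         # position inside the old nn.Sequential
    rest = ".".join(parts[3:])  # e.g. '0.weight'
    return "conv{}.{}".format(idx + 1, rest)   # conv1, conv2, ... in our rewrite

remapped = {remap_key(k): v for k, v in trained.items()}
print(sorted(remapped))  # ['conv1.0.weight', 'conv1.1.bias', 'conv2.0.weight']
```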
There are other situations, for example when we only append some layers and do not change the names or structure of the original layers. Then the shortcut below works (note that iterating over `model.state_dict()` yields keys, and the `k in loaded_dict` guard skips the newly added layers, which have no pretrained entry):

```python
loaded_dict = {k: loaded_dict[k] for k in model.state_dict() if k in loaded_dict}
```
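The filtered dict typically has fewer entries than the new model expects, so it is merged into the model's own state_dict before loading (or passed with `strict=False`). A sketch of the merge step, with plain dicts standing in for the real state_dicts:

```python
# Stand-ins: the checkpoint lacks the new head, the model lacks the old fc.
loaded_dict = {"conv1.weight": "pretrained", "fc.weight": "pretrained_fc"}
model_dict = {"conv1.weight": "random_init", "new_head.weight": "random_init"}

# Keep only checkpoint entries whose names the new model also has, then merge.
filtered = {k: v for k, v in loaded_dict.items() if k in model_dict}
model_dict.update(filtered)
# With real models: model.load_state_dict(model_dict)
print(model_dict["conv1.weight"])  # -> pretrained
```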
2. Freezing these layers
There are many ways to do this; here I use the freezing approach that matches the loading method above: set `requires_grad = False` on the parameters of the layers to be frozen.
I later found problems with my original freezing code, so I recommend first reading these discussions:
https://discuss.pytorch.org/t/how-the-pytorch-freeze-network-in-some-layers-only-the-rest-of-the-training/7088
or
https://discuss.pytorch.org/t/correct-way-to-freeze-layers/26714
Correspondingly, at training time the optimizer must only be given the parameters with `requires_grad = True`, so:

```python
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=lr)
```
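Putting the freezing and the optimizer together, here is a minimal runnable sketch. A toy two-layer model stands in for the truncated MobileNet; freezing `conv1` through `conv7` would follow the same pattern:

```python
import torch
import torch.nn as nn

# Toy two-layer net standing in for the truncated MobileNet.
net = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))

# Freeze the first layer, as we would freeze conv1..conv7.
for p in net[0].parameters():
    p.requires_grad = False

# The optimizer only receives parameters that still require gradients.
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=1e-3)
print(len(optimizer.param_groups[0]["params"]))  # 2: the weight and bias of net[1]
```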
Part two: my use case

First, my requirement: I train a network first, then build a second network of the form new_model + older_model, where older_model reuses the previously trained parameters and is frozen during the subsequent training, receiving no gradient updates. Gradients only update the new_model part in front.
I referred to the following post:
> I have some confusion regarding the correct way to freeze layers.
> Suppose I have the following NN: layer1, layer2, layer3
> I want to freeze the weights of layer2, and only update layer1 and layer3.
> Based on other threads, I am aware of the following ways of achieving this goal.
>
> Method 1:
> optim = {layer1, layer3}
> compute loss
> loss.backward()
> optim.step()
>
> Method 2:
> layer2_requires_grad=False
> optim = {all layers with requires_grad = True}
> compute loss
> loss.backward()
> optim.step()
>
> Method 3:
> optim = {layer1, layer2, layer3}
> layer2_old_weights = layer2.weight (this saves layer2 weights to a variable)
> compute loss
> loss.backward()
> optim.step()
> layer2.weight = layer2_old_weights (this sets layer2 weights to old weights)
>
> Method 4:
> optim = {layer1, layer2, layer3}
> compute loss
> loss.backward()
> set layer2 gradients to 0
> optim.step()
>
> My questions:
> Should we get different results for each method?
> Is any of these methods wrong?
> Is there a preferred method?
In the end I simply saved the earlier network; when defining the later network I load that saved model and, for every parameter of the older part, set:

```python
param.requires_grad = False
```
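The new_model + older_model setup can be sketched as below. The module names and shapes are illustrative stand-ins; in practice older_model's weights would first be restored with `older_model.load_state_dict(torch.load(path))`:

```python
import torch
import torch.nn as nn

# Stand-ins for the two sub-networks.
new_model = nn.Linear(4, 4)
older_model = nn.Linear(4, 2)

# Freeze every parameter of the pretrained part.
for param in older_model.parameters():
    param.requires_grad = False

combined = nn.Sequential(new_model, older_model)
trainable = [n for n, p in combined.named_parameters() if p.requires_grad]
print(trainable)  # -> ['0.weight', '0.bias']: only new_model is still trainable
```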