Overview
A typical training procedure for a neural network is as follows:
- Define the network: build a neural network with learnable parameters (weights)
- Feed in data: iterate over a dataset of inputs
- Compute outputs: process each input through the network
- Compute the loss: how far the output is from the target
- Backpropagate: propagate gradients back into the network's parameters
- Update the weights: typically with a simple rule such as weight = weight - learning_rate * gradient
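Each of these steps is covered in detail in sections 1-4 below. As a quick preview, here is a minimal end-to-end sketch with a toy one-layer network and random data (the layer sizes, loss, and optimizer here are illustrative, not the network defined later):
import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(4, 2)                              # a network with learnable weights
criterion = nn.MSELoss()                           # the loss function
optimizer = optim.SGD(net.parameters(), lr=0.01)   # the update rule

for _ in range(3):                                 # iterate over (random) input data
    x, y = torch.randn(1, 4), torch.randn(1, 2)
    optimizer.zero_grad()                          # clear old gradients
    out = net(x)                                   # forward pass: compute the output
    loss = criterion(out, y)                       # compute the loss
    loss.backward()                                # backpropagate gradients
    optimizer.step()                               # weight = weight - lr * gradient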
1. Define the network and compute the output
Use torch.nn to define the network's parameter structure, torch.nn.functional for the forward-pass operations, and torch.autograd for the backward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 6*6 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
Output:
Net(
(conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
(fc1): Linear(in_features=576, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
Drawn as a diagram, the network architecture looks like this: (figure omitted)
The Net class has three parts: __init__ defines the convolution kernels and fully connected layers (what each number stands for is explained in the code walkthrough below); forward defines all the tensor operations; num_flat_features is a helper that flattens everything except the batch dimension. (The backward pass is supplied by the autograd package, so you never define it yourself.)
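One number worth double-checking is fc1's in_features = 16 * 6 * 6 = 576. You can verify it by tracing shapes through the forward pass (continuing from the code above; each 3x3 convolution shrinks each side by 2, and each 2x2 max pool halves it, rounding down):
x = torch.randn(1, 1, 32, 32)
x = F.max_pool2d(F.relu(net.conv1(x)), 2)
print(x.shape)   # torch.Size([1, 6, 15, 15]): conv 32->30, pool 30->15
x = F.max_pool2d(F.relu(net.conv2(x)), 2)
print(x.shape)   # torch.Size([1, 16, 6, 6]): conv 15->13, pool 13->6; 16*6*6 = 576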
Code walkthrough
Conv2d
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
- in_channels (int) – number of channels in the input signal
- out_channels (int) – number of channels produced by the convolution
- kernel_size (int or tuple) – size of the convolution kernel
- stride (int or tuple, optional) – stride of the convolution
- padding (int or tuple, optional) – number of layers of zero-padding added to each side of the input
- dilation (int or tuple, optional) – spacing between kernel elements
- groups (int, optional) – number of blocked connections from input channels to output channels
- bias (bool, optional) – if bias=True, adds a learnable bias to the output
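A small sketch of how these arguments affect output shapes (the values are illustrative; note how padding=1 preserves the 32x32 spatial size that the unpadded conv1 above shrinks to 30x30):
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=3)             # like conv1 above
same = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=3, padding=1)  # zero-pad each side
x = torch.randn(1, 1, 32, 32)    # nSamples x nChannels x Height x Width
print(conv(x).shape)             # torch.Size([1, 6, 30, 30]): 32 - 3 + 1
print(same(x).shape)             # torch.Size([1, 6, 32, 32]): padding keeps the size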
Linear
class torch.nn.Linear(in_features, out_features, bias=True)
Applies a linear transformation to the input data: y = Ax + b
- in_features – size of each input sample
- out_features – size of each output sample
- bias – if set to False, the layer will not learn a bias. Default: True
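A quick sketch of the shapes involved (using fc1's sizes from above; note that the weight matrix A is stored as out_features x in_features):
import torch
import torch.nn as nn

fc = nn.Linear(in_features=576, out_features=120)  # same sizes as fc1 above
x = torch.randn(1, 576)      # a batch of one flattened sample
print(fc(x).shape)           # torch.Size([1, 120])
print(fc.weight.shape)       # torch.Size([120, 576]): A in y = Ax + b
print(fc.bias.shape)         # torch.Size([120]): b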
relu: non-linear activation function
torch.nn.functional.relu(input, inplace=False)
max_pool2d
torch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
The first pooling call uses a (2, 2) window; when stride is left as None it defaults to the kernel size (not 0), so the window moves 2 pixels at a time. The second call passes the single number 2, which is equivalent to (2, 2) because the window is square.
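A minimal sketch (the input shape is chosen to match the output of conv1 above):
import torch
import torch.nn.functional as F

x = torch.randn(1, 6, 30, 30)            # e.g. the output of conv1
print(F.max_pool2d(x, (2, 2)).shape)     # torch.Size([1, 6, 15, 15])
print(F.max_pool2d(x, 2).shape)          # same: a single int means a square window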
net.parameters() returns the model's learnable parameters.
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
Output:
10
torch.Size([6, 1, 3, 3])
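The count of 10 comes from a weight and a bias for each of the five layers (conv1, conv2, fc1, fc2, fc3). You can see this with named_parameters():
for name, param in net.named_parameters():
    print(name, param.size())   # conv1.weight, conv1.bias, conv2.weight, ...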
Example of using the network
Feed the network a random 32x32 input; out = net(input) computes the output with the network we just defined.
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
Before backpropagating, first clear the parameter gradient buffers with net.zero_grad(); here we then backprop with random gradients.
net.zero_grad()
out.backward(torch.randn(1, 10))
Note: torch.nn only supports mini-batch inputs. nn.Conv2d expects a 4D tensor of shape nSamples x nChannels x Height x Width. If you have a single sample, use input.unsqueeze(0) to add a fake batch dimension.
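For example (a single-sample sketch, reusing the net defined above):
single = torch.randn(1, 32, 32)      # one sample: nChannels x Height x Width
batched = single.unsqueeze(0)        # now 1 x 1 x 32 x 32
out = net(batched)                   # works: the fake batch dimension is present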
2. Loss function
A loss function measures the error between the target and the network's output; many definitions are available. One of those provided by the package is MSELoss, the mean squared error.
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
Output:
tensor(0.6285, grad_fn=<MseLossBackward>)
We have now computed the error between output and target and stored it in loss.
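loss carries a grad_fn that links it back through the graph of computations that produced it, and next_functions walks that graph backwards (the exact node names printed may vary across PyTorch versions):
print(loss.grad_fn)                                            # MSELoss node
print(loss.grad_fn.next_functions[0][0])                       # the Linear (fc3) node
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # the ReLU before it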
3. Backpropagation
To backpropagate the error, call backward() on loss. The code below inspects conv1's bias gradient before and after the backward pass: it starts out as all zeros (we just cleared it) and holds nonzero values afterwards.
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
4. Updating the parameters
torch.optim
Stochastic Gradient Descent (SGD) is the simplest practical update rule:
weight = weight - learning_rate * gradient
Written out directly from this rule, the update looks like:
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
But writing this by hand for every network is cumbersome, so the torch.optim package implements various update rules, SGD among them (along with Adam, RMSprop, and others).
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()              # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()                   # perform the update
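Note that optimizer.zero_grad() is needed on every iteration because gradients accumulate across backward() calls, as seen in section 3. A small sketch of what happens if you skip it (reusing net, input, target, and criterion from above):
net.zero_grad()
criterion(net(input), target).backward()
g1 = net.conv1.bias.grad.clone()          # gradient from the first backward pass
criterion(net(input), target).backward()  # second pass without clearing
print(torch.allclose(net.conv1.bias.grad, 2 * g1))  # True: the gradients added up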