PyTorch_Neural Networks

Neural Networks

Neural networks are built with torch.nn.Module.
An nn.Module contains the layers and a forward method that returns the output.

  1. Feed-forward network: it takes an input, passes it through the layers one after another, and finally outputs the computed result.

The training procedure (a minimal sketch of the full loop follows this list):

  1. Define a neural network with learnable parameters (weights)
  2. Iterate over the dataset
  3. Process the inputs through the network
  4. Compute the loss (how far the output is from the correct value)
  5. Propagate the gradients back into the network's parameters
  6. Update the network's weights: weight = weight - learning_rate * gradient
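To make the six steps concrete, here is a minimal sketch of such a loop; data_loader, criterion, and optimizer are placeholders assumed to exist (a concrete loss and optimizer are set up further below) and are not part of the original snippet.

# Hypothetical sketch of the training loop described above
# (assumes a model, a data_loader, a loss criterion and an optimizer already exist)
for inputs, targets in data_loader:        # 2. iterate over the dataset
    optimizer.zero_grad()                  # clear old gradients
    outputs = model(inputs)                # 3. forward pass through the network
    loss = criterion(outputs, targets)     # 4. compute the loss
    loss.backward()                        # 5. backpropagate the gradients
    optimizer.step()                       # 6. update the weights (e.g. SGD)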
# Define the network
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input channel, 6 output channels, 5x5 convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # fully connected layers: 16*5*5 flattened features -> 120 -> 84 -> 10 classes
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # max pooling over a (2, 2) window; a single number works for square windows
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # flatten everything except the batch dimension
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


if __name__ == '__main__':
    model = Net()
    print(model)
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)

Only the forward function must be defined in the model; the backward function (which computes the gradients) is created automatically by autograd. Any Tensor operation can be used inside forward.
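As an illustration (not part of the original post), the small hypothetical module below mixes a Linear layer with arbitrary element-wise operations in forward and still gets its backward pass from autograd:

# Illustrative only: autograd derives backward automatically from the forward pass
class TinyNet(nn.Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        x = self.fc(x)
        return torch.tanh(x) * 0.5 + x.mean()  # arbitrary Tensor operations

tiny = TinyNet()
y = tiny(torch.randn(3, 4))
y.sum().backward()  # works without a hand-written backward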

# model.parameters() returns the model's learnable parameters (weights)
params = list(model.parameters())
print(len(params))
print(params[0].size())
10
torch.Size([6, 1, 5, 5])
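To see which layer each entry belongs to, one can also iterate over model.named_parameters(); this small listing is an illustrative addition, not part of the original snippet:

# Illustrative: print each parameter's name and shape
for name, p in model.named_parameters():
    print(name, tuple(p.size()))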
input = torch.randn(1, 1, 32, 32)
out = model(input)
print(out)
tensor([[-0.0278, -0.0444, -0.0038,  0.0215,  0.0452,  0.0473,  0.0622, -0.1564,
          0.0825,  0.0806]], grad_fn=<AddmmBackward>)
# Zero the gradient buffers of all parameters, then backpropagate with a random gradient
model.zero_grad()
out.backward(torch.randn(1, 10))
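Note that nn.Conv2d expects a 4D input of shape (nSamples, nChannels, Height, Width): torch.nn works on mini-batches, so a single sample needs a fake batch dimension, for example (the variable names below are just for illustration):

# A single 1x32x32 sample needs an extra batch dimension before the forward pass
single = torch.randn(1, 32, 32)
out_single = model(single.unsqueeze(0))  # shape becomes (1, 1, 32, 32)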

Loss Functions

A loss function takes the pair (output, target) as input and computes a value that estimates how far the network's output is from the target.
The nn package contains several different loss functions. nn.MSELoss is one of the simplest: it computes the mean squared error between the output and the target.
For details: address

output = model(input)
target = torch.randn(10)
target = target.view(1, -1)
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
tensor(0.5557, grad_fn=<MseLossBackward>)
print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU (the output below shows AccumulateGrad instead)
<MseLossBackward object at 0x000001B89A3AE0C8>
<AddmmBackward object at 0x000001B8988909C8>
<AccumulateGrad object at 0x000001B89A308DC8>
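The same chain can be followed programmatically; the small loop below is an illustrative addition that walks the first non-None branch of the graph until it reaches a leaf node:

# Illustrative: walk one branch of the autograd graph starting at the loss
fn = loss.grad_fn
while fn is not None:
    print(type(fn).__name__)
    nexts = [f for f, _ in fn.next_functions if f is not None]
    fn = nexts[0] if nexts else None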

Backpropagation

Call loss.backward() to backpropagate the error.
Existing gradients need to be cleared first, otherwise the new gradients are accumulated onto them.

model.zero_grad()  # zero the gradient buffers
print('conv1.bias.grad before backward', model.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward', model.conv1.bias.grad)
conv1.bias.grad before backward tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward tensor([-0.0019,  0.0015, -0.0030, -0.0074,  0.0009, -0.0004])

Updating the weights

The simplest rule for updating the weights is stochastic gradient descent (SGD):

learning_rate = 0.01
for f in model.parameters():
    f.data.sub_(f.grad.data * learning_rate)  # in-place update: weight -= learning_rate * gradient

However, torch.optim provides ready-made optimizers (SGD, Nesterov-SGD, Adam, RMSprop, ...) that implement this and other update rules:

import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.01)
optimizer.zero_grad()             # zero the gradient buffers
output = model(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()                  # perform the parameter update
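Note that optimizer.zero_grad() has to be called at every iteration, just like model.zero_grad() above, because gradients accumulate across backward passes.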