PyTorch Basics: Neural Networks and Optimizers

torch.nn is a modular interface designed for neural networks. nn is built on top of autograd and can be used to define and run neural networks.
nn.functional provides common functions used in neural networks that have no learnable parameters, such as ReLU, pooling, and dropout.

# Import the relevant packages
import torch
import torch.nn as nn  # conventionally aliased as nn
import torch.nn.functional as F  # conventionally aliased as F
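
The practical split: layers with learnable parameters live in nn as classes, while their stateless counterparts live in nn.functional as plain functions. A minimal sketch of the equivalence (the tensor x below is made up for illustration):

x = torch.randn(2, 3)
relu_module = nn.ReLU()         # module form: instantiate, then call
out_module = relu_module(x)
out_functional = F.relu(x)      # functional form: stateless, call directly
print(torch.equal(out_module, out_functional))  # True: both apply max(x, 0) elementwise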

Defining a Network

To define a network in PyTorch, you only need to subclass nn.Module and implement the forward method. PyTorch implements the backward function automatically via autograd, and any tensor-supported operation and ordinary Python syntax can be used inside forward.

class Net(nn.Module):
    def __init__(self):
        # Subclasses of nn.Module must call the parent class constructor in their own constructor
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3)  # conv layer: 1 input channel, 6 output channels, 3x3 kernel
        self.fc1 = nn.Linear(1350, 10)  # 1350 = 6 * 15 * 15 (see forward below)

    # Forward pass
    def forward(self, x):
        print(x.size())  # result: [1, 1, 32, 32]
        x = self.conv1(x)  # by the conv output-size formula: (32 - 3) / 1 + 1 = 30
        x = F.relu(x)
        print(x.size())  # result: [1, 6, 30, 30]
        x = F.max_pool2d(x, (2, 2))  # 2x2 max pooling: 30 / 2 = 15
        x = F.relu(x)
        print(x.size())  # result: [1, 6, 15, 15]
        x = x.view(x.size()[0], -1)
        # -1 means "infer this dimension": flattens [1, 6, 15, 15] into [1, 1350]
        print(x.size())
        x = self.fc1(x)
        return x


net = Net()
print(net)
Net(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=1350, out_features=10, bias=True)
)
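
The sizes printed in forward follow the standard output-size formula out = (in - kernel + 2 * padding) / stride + 1. A quick sanity check of the numbers used above:

conv_out = (32 - 3) // 1 + 1      # 30: 3x3 kernel, stride 1, no padding
pool_out = (30 - 2) // 2 + 1      # 15: 2x2 max pool with stride 2
flat = 6 * pool_out * pool_out    # 1350, the in_features of fc1
print(conv_out, pool_out, flat)   # 30 15 1350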

The learnable parameters of the network are returned by .parameters()

for param in net.parameters():
    print(param)
Parameter containing:
tensor([[[[-0.1501,  0.0207, -0.2991],
          [ 0.1171,  0.0988,  0.0631],
          [ 0.2022, -0.1330, -0.2333]]],


        [[[ 0.2957, -0.2145, -0.2514],
          [ 0.1999, -0.0470, -0.0605],
          [ 0.2975,  0.1932,  0.0635]]],


        [[[ 0.1194, -0.2086, -0.1382],
          [ 0.0685,  0.1700, -0.1252],
          [-0.3048, -0.0106,  0.1005]]],


        [[[ 0.3157,  0.3140, -0.1614],
          [ 0.1859, -0.2659, -0.1587],
          [-0.2780, -0.2142, -0.0624]]],


        [[[ 0.2214,  0.1233,  0.1699],
          [-0.2489, -0.1493, -0.3306],
          [ 0.2730,  0.1064, -0.0716]]],


        [[[ 0.3102,  0.2241, -0.2976],
          [ 0.0525, -0.0518,  0.1736],
          [ 0.2654,  0.3064,  0.3140]]]], requires_grad=True)
Parameter containing:
tensor([-0.2208, -0.1180, -0.1639, -0.0986,  0.1076,  0.0020],
       requires_grad=True)
Parameter containing:
tensor([[ 0.0004,  0.0112,  0.0163,  ..., -0.0033, -0.0175,  0.0021],
        [-0.0188,  0.0177, -0.0196,  ..., -0.0163, -0.0052, -0.0001],
        [-0.0009, -0.0209,  0.0002,  ...,  0.0217, -0.0135,  0.0113],
        ...,
        [-0.0246, -0.0269,  0.0255,  ...,  0.0067, -0.0116, -0.0021],
        [ 0.0222,  0.0139,  0.0108,  ..., -0.0138,  0.0266,  0.0183],
        [ 0.0195, -0.0110, -0.0210,  ...,  0.0056, -0.0081,  0.0261]],
       requires_grad=True)
Parameter containing:
tensor([ 0.0119, -0.0075,  0.0034, -0.0180, -0.0205, -0.0038,  0.0109, -0.0236,
         0.0165,  0.0253], requires_grad=True)
# net.named_parameters() returns each parameter together with its name
for name, param in net.named_parameters():
    print('name: {}, parameters: {}'.format(name, param.size()))
name: conv1.weight, parameters: torch.Size([6, 1, 3, 3])
name: conv1.bias, parameters: torch.Size([6])
name: fc1.weight, parameters: torch.Size([10, 1350])
name: fc1.bias, parameters: torch.Size([10])
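
These four tensors account for every learnable parameter in the model. As a quick aside (not in the original), they can be totalled with one line:

total = sum(p.numel() for p in net.parameters())
print(total)  # 54 + 6 + 13500 + 10 = 13570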

Both the input and the output of the forward function are Tensors.

inputs = torch.randn(1, 1, 32, 32)
outputs = net(inputs)
outputs.size()
torch.Size([1, 1, 32, 32])
torch.Size([1, 6, 30, 30])
torch.Size([1, 6, 15, 15])
torch.Size([1, 1350])
torch.Size([1, 10])
inputs.size()
torch.Size([1, 1, 32, 32])

Before backpropagation, all gradients must first be zeroed.
Backpropagation itself is implemented automatically by PyTorch; simply call the .backward function. Because outputs is not a scalar, backward needs a gradient tensor of matching shape, which is why torch.ones(1, 10) is passed below.

net.zero_grad()
outputs.backward(torch.ones(1, 10))
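
As a quick check (not in the original), the call populates the gradients of the leaf parameters:

print(net.conv1.bias.grad.size())  # torch.Size([6]): filled in by backward, no longer None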

torch.nn only supports batches; it cannot take a single sample on its own. If you have a single sample, add a fake batch dimension first, e.g. with input.unsqueeze(0).
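
A minimal sketch of that workaround (the single_sample tensor is illustrative):

single_sample = torch.randn(1, 32, 32)  # shape [C, H, W], no batch dimension
batched = single_sample.unsqueeze(0)    # shape [1, 1, 32, 32]
out = net(batched)                      # now acceptable to torch.nn layers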

# nn provides common loss functions out of the box
y = torch.arange(0, 10).view(1, 10).float()
criterion = nn.MSELoss()
loss = criterion(outputs, y)
loss.item()
26.876943588256836
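
Tracing loss backwards through grad_fn shows the graph autograd has built; a hedged peek (the exact repr strings vary by PyTorch version):

print(loss.grad_fn)  # e.g. <MseLossBackward0 object at 0x...>
print(loss.grad_fn.next_functions[0][0])  # e.g. <AddmmBackward0 ...>, the node from the Linear layer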

Optimizers

After backpropagation has computed all the gradients, an optimization method is still needed to update the network's weights and parameters, for example stochastic gradient descent (SGD):
weight = weight - learning_rate * gradient
torch.optim implements most common optimization methods.
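
That update rule can be written by hand; a minimal sketch, assuming the gradients have already been computed by .backward():

learning_rate = 0.01
for p in net.parameters():
    p.data.sub_(p.grad.data * learning_rate)  # in-place update: p = p - lr * grad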

import torch.optim as optim

out = net(inputs)
criterion = nn.MSELoss()
loss = criterion(out, y)
optimizer = optim.SGD(net.parameters(), lr=0.01)  # create the optimizer: SGD only needs the parameters and a learning rate
optimizer.zero_grad()  # zero the gradients
loss.backward()
optimizer.step()  # update the parameters
torch.Size([1, 1, 32, 32])
torch.Size([1, 6, 30, 30])
torch.Size([1, 6, 15, 15])
torch.Size([1, 1350])
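
Putting the pieces together, every training iteration repeats the same pattern; a hedged sketch of a loop (data_loader is a hypothetical iterable of (input, target) batches, not defined in the original):

optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()
for batch_inputs, batch_targets in data_loader:  # hypothetical DataLoader
    optimizer.zero_grad()                        # zero the gradients
    batch_outputs = net(batch_inputs)            # forward pass
    loss = criterion(batch_outputs, batch_targets)
    loss.backward()                              # backpropagate
    optimizer.step()                             # update the parameters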