Deep Learning Framework PyTorch: Introduction and Practice, Study Notes (1)

Starting today I am learning PyTorch from scratch, and I will keep my notes here.

First Steps with PyTorch

Tensor

import torch as t
x = t.Tensor(5, 3)   # create a 5x3 matrix (uninitialized)
print(x.size())      # print the dimensions of x
print(x.size()[0])   # print the size of dimension 0 of x
print(x.size(0))     # print the size of dimension 0 of x

y = t.rand(5, 3)        # matrix of random numbers drawn from [0, 1)
# addition
x + y
t.add(x, y)
result = t.Tensor(5, 3) # pre-allocate the output tensor
t.add(x, y, out=result)

In-place operations

y.add(x)    # does not change y
y.add_(x)   # changes y in place

Functions whose names end with an underscore modify the Tensor itself: x.add(y) and x.t() return a new Tensor and leave x unchanged, while x.add_(y) changes x.
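For example, a small sketch continuing from the x and y defined above:

z = x.add(y)    # returns a new Tensor; x is unchanged
x.add_(y)       # modifies x itself; afterwards x equals the old x + y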

 

Tensor and NumPy

Tensor to NumPy

a = t.Tensor(5, 3)
b = a.numpy()

NumPy to Tensor

import numpy as np
a = np.ones(5)
b = t.from_numpy(a)

After the conversion, the Tensor and the NumPy array share the same memory, so modifying one also modifies the other.
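A quick sketch of this sharing, reusing a and b from the NumPy-to-Tensor example above:

b.add_(1)    # in-place add on the Tensor
print(a)     # the NumPy array is now all 2s as well
print(b)     # the Tensor is all 2s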

Autograd: Automatic Differentiation

Variable

data: the wrapped Tensor

grad: stores the gradient; it is itself a Variable

grad_fn: a Function object that computes the gradients of the inputs during back-propagation

from torch.autograd import Variable
x = Variable(t.ones(2, 2), requires_grad=True)
y = x.sum()
y.backward()
print(x.grad)
# 1, 1
# 1, 1
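The three attributes listed above can be inspected directly on this example; a minimal sketch (the exact grad_fn repr varies between PyTorch versions):

print(x.data)     # the underlying 2x2 tensor of ones
print(x.grad)     # the accumulated gradient, also all ones here
print(x.grad_fn)  # None, because x is a leaf Variable created by the user
print(y.grad_fn)  # a SumBackward-style Function that produced y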

Gradients are accumulated during back-propagation: each backward pass adds to the gradients left by previous passes, so the gradients must be reset to zero before calling backward again.

x.grad.data.zero_()
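A small sketch of the accumulation behaviour, continuing with the same x:

y = x.sum()
y.backward()
print(x.grad)        # all ones again
y = x.sum()
y.backward()
print(x.grad)        # all twos: the second pass was added onto the first
x.grad.data.zero_()  # reset to zero before the next backward pass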

Neural Networks

Defining the network

  1. Subclass nn.Module
  2. Implement the forward method of nn.Module
  3. Put layers with learnable parameters in the constructor __init__() (layers without learnable parameters may or may not be placed there)
import torch.nn as nn
import torch
import torch.nn.functional as F
from torch.autograd import Variable

class MyNet(nn.Module):
    def __init__(self):
        # a subclass of nn.Module must call the parent class constructor in its own constructor
        super(MyNet, self).__init__()
        self.first = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
        self.second = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)
        self.Linear1 = nn.Linear(in_features=32*12*12, out_features=20)
        self.Linear2 = nn.Linear(in_features=20, out_features=1)

    def forward(self, x):
        x = self.first(x)
        x = F.relu(x)
        x = self.second(x)
        x = F.relu(x)
        x = x.view(x.size()[0], -1)
        x = self.Linear1(x)
        x = self.Linear2(x)
        return x

Net = MyNet()
print(Net)

Once the forward function is defined, the backward function is implemented automatically (by autograd).

# network parameters
params = list(Net.parameters())
print(len(params))
# Net.named_parameters() returns the learnable parameters together with their names
for name, parameters in Net.named_parameters():
    print(name, ":", parameters.size())

Both the input and the output of the forward function are Variables.

input = Variable(torch.rand((1, 3, 16, 16)))
out = Net(input)
print(out.size())

torch.nn only supports mini-batches, so the input must be 4-dimensional: batch × channel × height × width.
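If there is only a single sample, the batch dimension can be added with unsqueeze; a small sketch with a hypothetical 3 × 16 × 16 input matching the network above:

single = torch.rand(3, 16, 16)   # one sample: channel x height x width
batched = single.unsqueeze(0)    # add the batch dimension -> (1, 3, 16, 16)
out = Net(Variable(batched))
print(out.size())                # torch.Size([1, 1])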

Loss Function

target = torch.rand((1, 1))
criterion = nn.MSELoss()
loss = criterion(out, target)
print(loss)

Net.zero_grad()
print("gradient before backward:", Net.first.bias.grad)
loss.backward()
print("gradient after backward:", Net.first.bias.grad)

Optimizer

import torch.optim as optim
optimizer = optim.Adam(Net.parameters(), lr=0.01)
# zero the gradients first
optimizer.zero_grad()
out = Net(input)
print(out.size())

target = torch.rand((1, 1))
criterion = nn.MSELoss()
loss = criterion(out, target)
loss.backward()
# update the parameters
optimizer.step()
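Putting the loss function and the optimizer together, a minimal training-loop sketch (the random input/target pairs here are only placeholders for real data):

for i in range(100):                             # a hypothetical number of iterations
    input = Variable(torch.rand(1, 3, 16, 16))   # placeholder input
    target = Variable(torch.rand(1, 1))          # placeholder target
    optimizer.zero_grad()                        # clear gradients from the previous step
    out = Net(input)
    loss = criterion(out, target)
    loss.backward()                              # back-propagate
    optimizer.step()                             # update the parameters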

Data Loading and Preprocessing

A DataLoader is an iterable object that collates the individual samples returned by a Dataset into batches and provides multi-worker loading, data shuffling and similar conveniences. One complete pass over the Dataset corresponds to one iteration over the DataLoader.

import torchvision as tv
import torch as t
import torchvision.transforms as transforms
from torchvision.transforms import ToPILImage
import torch.nn as nn
import torch
import torch.nn.functional as F
from torch.autograd import Variable

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5)),])
trainset = tv.datasets.CIFAR10(
    root='F:/code/EDSR-PyTorch-master-png/',
    train=True, download=True, transform=transform)
trainloader = t.utils.data.DataLoader(
    trainset,
    batch_size=4,
    shuffle=True,
    num_workers=0
)
testset = tv.datasets.CIFAR10(
    root='F:/code/EDSR-PyTorch-master-png/',
    train=False, download=True, transform=transform)
testloader = t.utils.data.DataLoader(
    testset,
    batch_size=4,
    shuffle=False,
    num_workers=0
)
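To check that the loaders work, one batch can be drawn from trainloader; a small sketch (the class-name tuple below is the standard CIFAR-10 label order):

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
dataiter = iter(trainloader)
images, labels = next(dataiter)   # one batch of 4 images and 4 labels
print(images.size())              # torch.Size([4, 3, 32, 32])
print(' '.join(classes[labels[j]] for j in range(4)))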

 
