Deep Learning Framework PyTorch: Introduction and Practice — Study Notes (1)

Starting today I am learning PyTorch from scratch, and I will keep my notes here.

First Steps with PyTorch

Tensor

import torch as t
x = t.Tensor(5, 3)   # create a 5x3 matrix (uninitialized)
print(x.size())      # print the dimensions of x
print(x.size()[0])   # print the 0th dimension of x
print(x.size(0))     # print the 0th dimension of x

y = t.rand(5, 3)  # matrix of random numbers in [0, 1)
# addition
x + y
t.add(x, y)
result = t.rand(5, 3)    # pre-allocate a tensor to hold the result
t.add(x, y, out=result)  # write the sum into result

In-place operations

y.add(x)   # returns a new Tensor; y is unchanged
y.add_(x)  # modifies y in place

Functions whose names end with an underscore modify the Tensor itself: x.add(y) and x.t() return a new Tensor and leave x unchanged, while x.add_(y) changes x.
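A quick check of this convention, continuing with the x and y defined above (the printed values will vary, since the tensors are random or uninitialized):

z = y.add(x)    # new Tensor; y keeps its old values
print(y)
y.add_(x)       # in place; y now holds y + x
print(y)
print(y.t())    # transpose also returns a new Tensor and leaves y unchanged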

 

Tensor and NumPy

Tensor to NumPy

a = t.Tensor(5, 3)
b = a.numpy()

NumPy to Tensor

import numpy as np
a = np.ones(5)
b = t.from_numpy(a)

After conversion the Tensor and the NumPy array share the same memory, so changing one also changes the other.
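A minimal sketch of the shared memory, continuing with the a and b from the NumPy-to-Tensor example above:

b.add_(1)   # modify the Tensor in place
print(b)    # all twos
print(a)    # the NumPy array changed as well: [2. 2. 2. 2. 2.]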

Autograd: Automatic Differentiation

Variable

data: the wrapped tensor

grad: stores the gradient; itself also a Variable

grad_fn: the Function used in backpropagation to compute the gradients of the inputs

from torch.autograd import Variable
x = Variable(t.ones(2, 2), requires_grad=True)
y = x.sum()
y.backward()
print(x.grad)
# 1, 1
# 1, 1
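To connect this code with the three attributes listed above, a quick inspection (not from the book):

print(x.data)     # the underlying tensor of ones
print(x.grad)     # the accumulated gradient
print(x.grad_fn)  # None: x is a leaf Variable created by the user
print(y.grad_fn)  # the Function that produced y (a sum backward node)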

Gradients are accumulated during backpropagation: each backward pass adds onto the gradients already stored. The gradients therefore have to be zeroed before each backward pass.

x.grad.data.zero_()
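A small sketch of the accumulation, reusing the x from above:

x.grad.data.zero_()   # clear any existing gradient
y = x.sum()
y.backward()
print(x.grad)         # all ones
y = x.sum()
y.backward()
print(x.grad)         # all twos: the second pass accumulated onto the first
x.grad.data.zero_()   # reset before the next backward pass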

Neural Networks

Defining a Network

  1. Inherit from nn.Module
  2. Implement the forward method of nn.Module
  3. Put the layers with learnable parameters in the constructor __init__() (layers without learnable parameters may be placed there or not)
import torch.nn as nn
import torch
import torch.nn.functional as F
from torch.autograd import Variable

class MyNet(nn.Module):
    def __init__(self):
        # a subclass of nn.Module must call the parent constructor in its own constructor
        super(MyNet, self).__init__()
        self.first = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
        self.second = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)
        self.Linear1 = nn.Linear(in_features=32*12*12, out_features=20)
        self.Linear2 = nn.Linear(in_features=20, out_features=1)

    def forward(self, x):
        x = self.first(x)
        x = F.relu(x)
        x = self.second(x)
        x = F.relu(x)
        x = x.view(x.size()[0], -1)
        x = self.Linear1(x)
        x = self.Linear2(x)
        return x

Net = MyNet()
print(Net)

Once the forward function is defined, backward is implemented automatically (by autograd).

# network parameters
params = list(Net.parameters())
print(len(params))
# Net.named_parameters() returns the learnable parameters together with their names
for name, parameters in Net.named_parameters():
    print(name, ":", parameters.size())

The input and output of the forward function are both Variables.

input = Variable(torch.rand((1, 3, 16, 16)))
out = Net(input)
print(out.size())

torch.nn only supports mini-batches, so the input must be 4-dimensional: batch × channel × height × width.
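If only a single sample of shape channel × height × width is available, a batch dimension can be added with unsqueeze(0); a small sketch reusing the Net defined above:

single = torch.rand(3, 16, 16)   # one sample, no batch dimension
batched = single.unsqueeze(0)    # add a leading batch dimension
print(batched.size())            # torch.Size([1, 3, 16, 16])
out = Net(Variable(batched))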

Loss Function

target = torch.rand((1, 1))
criterion = nn.MSELoss()
loss = criterion(out, target)
print(loss)

Net.zero_grad()
print("反向傳播之前的梯度:", Net.first.bias.grad)
loss.backward()
print("反向傳播之後的梯度:", Net.first.bias.grad)

Optimizer

import torch.optim as optim
optimizer = optim.Adam(Net.parameters(), lr=0.01)
# zero the gradients first
optimizer.zero_grad()
out = Net(input)
print(out.size())

target = torch.rand((1, 1))
criterion = nn.MSELoss()
loss = criterion(out, target)
loss.backward()
# update the parameters
optimizer.step()
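Putting the forward pass, loss, and optimizer together, a minimal training step looks like the following sketch (it simply repeats the steps above on the same batch; not code from the book):

for i in range(5):
    optimizer.zero_grad()          # clear accumulated gradients
    out = Net(input)               # forward pass
    loss = criterion(out, target)  # MSE loss against the random target
    loss.backward()                # backpropagate
    optimizer.step()               # update the parameters
    print(i, loss)                 # the loss should decrease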

Data Loading and Preprocessing

A DataLoader is an iterable object: it collates the individual samples returned by a Dataset into batches, and provides features such as multi-process loading and shuffling. Once the program has iterated over the whole Dataset, one iteration over the DataLoader is also complete. A short iteration example follows the setup code below.

import torchvision as tv
import torch as t
import torchvision.transforms as transforms
from torchvision.transforms import ToPILImage
import torch.nn as nn
import torch
import torch.nn.functional as F
from torch.autograd import Variable

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5)),])
trainset = tv.datasets.CIFAR10(
    root='F:/code/EDSR-PyTorch-master-png/',
    train=True, download=True, transform=transform)
trainloader = t.utils.data.DataLoader(
    trainset,
    batch_size=4,
    shuffle=True,
    num_workers=0
)
testset = tv.datasets.CIFAR10(
    root='F:/code/EDSR-PyTorch-master-png/',
    train=False, download=True, transform=transform)
testloader = t.utils.data.DataLoader(
    testset,
    batch_size=4,
    shuffle=False,
    num_workers=0
)
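A quick look at one batch from the trainloader defined above (CIFAR-10 images are 3 × 32 × 32, so with batch_size=4 one batch has shape 4 × 3 × 32 × 32):

dataiter = iter(trainloader)
images, labels = next(dataiter)  # one batch of 4 samples
print(images.size())             # torch.Size([4, 3, 32, 32])
print(labels)                    # 4 class indices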

 
