A Hands-On Guide to Building a Model in PyTorch (GPU and CPU Versions)

The example code in this article is the GPU version. If you need the CPU version, simply remove the device-related parts of the code.
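If you want a single version that runs on both, a common pattern (an addition of mine, not part of the original code) is to fall back to the CPU when no GPU is available:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')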


A fully connected neural network is essentially a multilayer perceptron. In what follows I will build one from scratch, including the node parameters, the loss function, and the optimization method.

Since most models already achieve good classification performance on the traditional MNIST dataset, I will use the Fashion-MNIST dataset for our experiments instead. It contains 10 classes, and the whole dataset is only a few dozen MB.

When loading the data we always convert it to tensors: transforms.ToTensor turns each image into a torch.float32 tensor with values in [0.0, 1.0].

1 Loading the Dataset

## Load the dataset
import torch 
import torchvision as tv
from torchvision import transforms
from IPython import display

display.set_matplotlib_formats('svg')  # render figures as SVG

mn_train = tv.datasets.FashionMNIST(root = '~/Datasets/FashionMNIST', train=True, download=True, transform = transforms.ToTensor())
mn_test = tv.datasets.FashionMNIST(root = '~/Datasets/FashionMNIST', train=False, download=True, transform = transforms.ToTensor())

print(len(mn_train), len(mn_test))
print(mn_train[0][0].shape, mn_train[0][1])
device = torch.device('cuda')  # train on the GPU

The output of this code is:

60000 10000
torch.Size([1, 28, 28]) 9

From the output above we can see that the dataset consists of 28×28 single-channel images, and 9 is the label of the first image. Next, let's see what the data actually looks like.
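We can also verify the earlier claim about transforms.ToTensor with a quick check (a minimal sketch; the exact minimum and maximum depend on the image):

img, label = mn_train[0]
print(img.dtype)                           # torch.float32
print(img.min().item(), img.max().item())  # both within [0.0, 1.0]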

2 Inspecting the Data

## Plotting
%matplotlib inline
import matplotlib.pyplot as plt

labels = ['t-shirt', 'trouser', 'pullover', 'dress', 'coat',
                   'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']
imgs, y = [], []
for i in range(10):
    imgs.append(mn_train[i][0])
    y.append(mn_train[i][1])

labels = [labels[int(i)] for i in y]

plt.rcParams['figure.figsize'] = (12,12)
_, figs = plt.subplots(1, len(imgs))

for f, img, label in zip(figs, imgs, labels):
    f.imshow(img.view( (28, 28) ).numpy())
    f.set_title(label)
    f.axes.get_xaxis().set_visible(False)
    f.axes.get_yaxis().set_visible(False)
plt.show()

The output is:

[Figure: the first ten Fashion-MNIST training images with their class labels]
Next we use DataLoader to build a data loader. For a detailed introduction to reading data with PyTorch, see this blog post.

3 Reading the Data

## Read the data
from torch.utils.data import DataLoader
batch_size = 128

train_iter = DataLoader(mn_train, batch_size=batch_size, shuffle=True, num_workers=4)
test_iter = DataLoader(mn_test, batch_size=batch_size, shuffle=False, num_workers=4)
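As a quick sanity check (a small sketch of mine, not part of the original post), we can pull one batch and inspect its shape:

x, y = next(iter(train_iter))
print(x.shape, y.shape)  # torch.Size([128, 1, 28, 28]) torch.Size([128])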

Next, we build the model.

4 Defining the Model Parameters

Here we build a model with two hidden layers. When defining the parameters there is a big pitfall in how device is used; I have stepped into it myself and hope you can avoid it. See this article for details.

## Define the model parameters
import numpy as np

num_inputs, num_hiddens1, num_hiddens2, num_outputs = 784, 128, 128, 10

w1 = torch.tensor(np.random.normal(0, 0.01, size=(num_inputs, num_hiddens1)),
                  dtype=torch.float, device = device, requires_grad=True)
b1 = torch.zeros(num_hiddens1, device = device, requires_grad=True)

w2 = torch.tensor(np.random.normal(0, 0.01, size=(num_hiddens1, num_hiddens2)),
                  dtype=torch.float,device = device, requires_grad=True)
b2 = torch.zeros(num_hiddens2,device = device, requires_grad=True)

w3 = torch.tensor(np.random.normal(0, 0.01, size=(num_hiddens2, num_outputs)), 
                  dtype=torch.float, device = device,requires_grad=True)
b3 = torch.zeros(num_outputs,device = device, requires_grad=True)


params = [w1, b1, w2, b2, w3, b3]
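Note that device=device is passed to torch.tensor at creation time. My reading of the pitfall mentioned above (an assumption, since the linked article is not reproduced here) is the classic one: creating a parameter with requires_grad=True on the CPU and then moving it with .to(device) yields a non-leaf tensor whose .grad is never populated, so the SGD update silently does nothing. A quick illustration:

## hypothetical illustration of the pitfall -- do NOT define parameters this way
w_bad = torch.tensor(np.random.normal(0, 0.01, size=(num_inputs, num_hiddens1)),
                     dtype=torch.float, requires_grad=True).to(device)
print(w_bad.is_leaf)  # False: backward() will not fill w_bad.grad
print(w1.is_leaf)     # True: created directly on the device, as above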

5 Loss Function, Optimizer, Activation, and Evaluation Functions

def cross_entropy(y_hat, y, batch_size = batch_size):
    # pick out the predicted probability of the true class for each sample,
    # take -log, and divide by the batch size so that .sum() gives the mean loss
    # (note: the last batch may be smaller than batch_size, which slightly skews the average)
    los = - torch.log(y_hat.gather(1, y.view(-1, 1))) / batch_size
    return los

def sgd(params, lr):
    # plain mini-batch SGD: update each parameter in place using its gradient
    for param in params:
        param.data -= lr * param.grad

def relu(x):
    # element-wise max(x, 0)
    return torch.max(input = x, other = torch.tensor(0.0).to(device))

def evaluate_accuracy(y_hat, y):
    # number of correctly classified samples in this batch
    return (y_hat.argmax(dim=1) == y).float().sum().item()
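To see what gather is doing inside cross_entropy, here is a tiny hand-made example (the numbers are made up for illustration):

y_hat = torch.tensor([[0.1, 0.7, 0.2],
                      [0.8, 0.1, 0.1]])
y = torch.tensor([1, 0])
print(y_hat.gather(1, y.view(-1, 1)))  # tensor([[0.7000], [0.8000]]): the probability assigned to each true class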

6 Building the Model

## Build the model

### Define softmax
def softmax(x):
    x_exp = x.exp()
    partition = x_exp.sum(dim=1, keepdim=True)
    return x_exp / partition  # broadcasting is applied here

def net(x):
    x = x.view(-1, num_inputs)  # flatten each image into a 784-dim vector
    h1 = relu(torch.matmul(x, w1) + b1)   ### first hidden layer
    h2 = relu(torch.matmul(h1, w2) + b2)  ### second hidden layer
    output = softmax(torch.matmul(h2, w3) + b3)

    return output
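As a quick sanity check (a sketch of mine), the output of net should be a valid probability distribution over the 10 classes:

with torch.no_grad():
    probs = net(mn_train[0][0].to(device))
print(probs.shape)         # torch.Size([1, 10])
print(probs.sum().item())  # ≈ 1.0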

7 Training the Model

Pay attention to the learning rate here: if it is set too large, the model will fail to converge. See this article for details.

epochs = 20
lr = 0.1

loss = cross_entropy

### Train the model
for epoch in range(epochs):
    train_ls = 0  ## train loss
    train_acc = 0
    test_acc = 0
    n = 0
    ### training set
    for x, y in train_iter:
        x = x.to(device)
        y = y.to(device)
        y_hat = net(x)
        ls = loss(y_hat, y).sum()
        ls.backward()  ## backpropagation
        sgd(params, lr)
        for param in params:
            param.grad.data.zero_()  # reset gradients, otherwise they accumulate across batches
        train_ls += ls.item()
        train_acc += evaluate_accuracy(y_hat, y)
        n += y.shape[0]

    ## test set
    n_test = 0
    for x, y in test_iter:
        x = x.to(device)
        y = y.to(device)
        y_hat = net(x)
        test_acc += evaluate_accuracy(y_hat, y)
        n_test += y.shape[0]
    print('epoch: {}, loss: {:.4f}, train acc: {:.4f}, test acc: {:.4f}'.format(epoch+1, train_ls/n, train_acc/n, test_acc/n_test))
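One optional refinement (not in the original code): no gradients are needed on the test set, so wrapping the evaluation loop in torch.no_grad() skips building the autograd graph and saves memory. A minimal sketch of the modified test loop:

with torch.no_grad():
    for x, y in test_iter:
        x, y = x.to(device), y.to(device)
        test_acc += evaluate_accuracy(net(x), y)
        n_test += y.shape[0]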

The output is:

epoch: 1, loss: 0.0117, train acc: 0.4318, test acc: 0.6827
epoch: 2, loss: 0.0055, train acc: 0.7444, test acc: 0.7803
epoch: 3, loss: 0.0043, train acc: 0.8033, test acc: 0.8212
epoch: 4, loss: 0.0037, train acc: 0.8291, test acc: 0.8363
epoch: 5, loss: 0.0034, train acc: 0.8426, test acc: 0.8391
epoch: 6, loss: 0.0032, train acc: 0.8518, test acc: 0.8370
epoch: 7, loss: 0.0030, train acc: 0.8587, test acc: 0.8487
epoch: 8, loss: 0.0029, train acc: 0.8647, test acc: 0.8541
epoch: 9, loss: 0.0028, train acc: 0.8713, test acc: 0.8500
epoch: 10, loss: 0.0027, train acc: 0.8749, test acc: 0.8641
epoch: 11, loss: 0.0026, train acc: 0.8784, test acc: 0.8671
epoch: 12, loss: 0.0025, train acc: 0.8813, test acc: 0.8710
epoch: 13, loss: 0.0025, train acc: 0.8842, test acc: 0.8730
epoch: 14, loss: 0.0024, train acc: 0.8885, test acc: 0.8651
epoch: 15, loss: 0.0023, train acc: 0.8903, test acc: 0.8763
epoch: 16, loss: 0.0023, train acc: 0.8927, test acc: 0.8750
epoch: 17, loss: 0.0022, train acc: 0.8945, test acc: 0.8791
epoch: 18, loss: 0.0022, train acc: 0.8970, test acc: 0.8716
epoch: 19, loss: 0.0021, train acc: 0.8981, test acc: 0.8762
epoch: 20, loss: 0.0021, train acc: 0.8998, test acc: 0.8760

8 Saving and Loading the Model

Once the model is trained, you will of course want to save it so it can be loaded and reused later. For details, see my blog post >> A Complete Guide to Saving and Loading PyTorch Models: everything about reading and writing models with PyTorch.
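For the parameter-list model in this article, a minimal sketch (the file name is arbitrary):

## save the raw parameter tensors
torch.save(params, 'mlp_params.pt')

## later: load them back onto the right device and unpack
params = torch.load('mlp_params.pt', map_location=device)
w1, b1, w2, b2, w3, b3 = params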
