Implementing MNIST with PyTorch (with a training comparison of the SGD, Adam, and AdaBound optimizers)

  The fastest way to learn a tool is to learn it while using it, that is, while solving a real problem. The complete code is attached at the end of the article.

1. Data Preparation

  PyTorch ships the MNIST dataset through torchvision, so we only need to load the data it provides.

import torch
from torchvision import datasets, transforms

# batch_size is the number of samples fed into the network per training step
batch_size = 64
# MNIST Dataset
# MNIST is bundled with torchvision's datasets module and can be loaded directly
train_dataset = datasets.MNIST(root='./data/',
                               train=True,
                               transform=transforms.ToTensor(),
                               download=True)

test_dataset = datasets.MNIST(root='./data/',
                              train=False,
                              transform=transforms.ToTensor())

train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
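
  As a quick sanity check (a minimal sketch, assuming the loaders above have been built), pull a single batch from train_loader and confirm its shape:

# one batch of images and labels: (batch_size, channels, height, width)
images, labels = next(iter(train_loader))
print(images.size())   # torch.Size([64, 1, 28, 28])
print(labels.size())   # torch.Size([64])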


2. Building the Network

import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # input: 1 channel, output: 10 channels, 5*5 kernel
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=10, kernel_size=5)
        # input: 10 channels, output: 20 channels, 5*5 kernel
        self.conv2 = nn.Conv2d(10, 20, 5)
        # input: 20 channels, output: 40 channels, 3*3 kernel
        self.conv3 = nn.Conv2d(20, 40, 3)
        # 2*2 max-pooling layer
        self.mp = nn.MaxPool2d(2)
        # fully connected layer (in_features, out_features)
        self.fc = nn.Linear(40, 10)

    def forward(self, x):
        # x is a 4-D tensor of shape (batch_size, channels, height, width)
        # x.size(0) is the batch size; we keep it for the flattening step below
        in_size = x.size(0)
        # x: 64*1*28*28
        x = F.relu(self.mp(self.conv1(x)))
        # conv1: (n + 2p - f)/s + 1 = 28 - 5 + 1 = 24, so 24*24 before pooling; 2*2 pooling halves it to 12*12 -> x: 64*10*12*12
        x = F.relu(self.mp(self.conv2(x)))
        # conv2: 12 - 5 + 1 = 8, pooled to 4*4 -> x: 64*20*4*4
        x = F.relu(self.mp(self.conv3(x)))
        # conv3: 4 - 3 + 1 = 2, pooled to 1*1 -> x: 64*40*1*1

        x = x.view(in_size, -1)  # flatten the tensor (a reshape)
        # print(x.size())
        # x: 64*40
        x = self.fc(x)
        # x: 64*10
        # print(x.size())
        return F.log_softmax(x, dim=1)  # 64*10
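
  To double-check the layer arithmetic in the comments above, a minimal sketch (assuming the Net class just defined) is to push a dummy batch through the model and look at the output shape:

import torch

# a dummy batch of 64 single-channel 28*28 images
dummy = torch.randn(64, 1, 28, 28)
net = Net()
out = net(dummy)
print(out.size())   # torch.Size([64, 10]) -- one log-probability per digit class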


3. Training

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

model = Net()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


def train(epoch):
    # enumerate() wraps an iterable so that we get both an index and the value at each step
    for batch_idx, (data, target) in enumerate(train_loader):  # batch_idx is the index supplied by enumerate(), starting at 0
        # data.size():[64, 1, 28, 28]
        # target.size():[64]
        output = model(data)
        # output:64*10
        loss = F.nll_loss(output, target)
        # print progress every 200 batches
        if batch_idx % 200 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.
                  format(
                epoch,
                batch_idx * len(data),
                len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.item()))

        optimizer.zero_grad()   # zero the gradients of all parameters
        loss.backward()         # backpropagate to compute gradients
        optimizer.step()        # let the optimizer update the parameters

# entry point
for epoch in range(1, 10):
    train(epoch)

  A few of the names used during training (the short sketch below shows how they relate to the size of the dataset):

  • batch_idx: the index of the current batch within the epoch, supplied by enumerate() and starting at 0.
  • batch_size: the number of samples fed into the network per step.
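
  A minimal sketch, assuming the loaders and batch_size from section 1 are in scope:

# how the quantities used in the progress printout relate to each other
print(len(train_loader.dataset))   # 60000 -- number of images in the MNIST training set
print(len(train_loader))           # 938   -- batches per epoch, i.e. ceil(60000 / 64)
print(batch_size)                  # 64    -- samples per batch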

4. Testing the Model


def test():
    test_loss = 0
    correct = 0
    # no gradients are needed during evaluation
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            # accumulate the summed loss over the whole test set
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            # get the index of the max log-probability,
            # i.e. the predicted class for each sample
            pred = output.data.max(1, keepdim=True)[1]
            # print(pred)
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

# entry point
for epoch in range(1, 10):
    print("test num"+str(epoch))
    train(epoch)
    test()
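
  The line pred = output.data.max(1, keepdim=True)[1] picks, for every row of the 64*10 log-probability matrix, the index of the largest entry, i.e. the predicted digit. A tiny standalone sketch of the same idea (the tensor here is made up for illustration):

import torch

# two fake rows of log-probabilities over the 10 digit classes
fake_output = torch.randn(2, 10)
# max along dim=1 returns (values, indices); [1] keeps the indices
pred = fake_output.max(1, keepdim=True)[1]
print(pred.size())   # torch.Size([2, 1]) -- one predicted class per sample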

Here is the end of the output from this run:
[Image: training log and final test result]
As the output shows, with SGD as the optimizer the test-set accuracy eventually reaches about 92%.


5. Improving the Test-Set Accuracy

5.1 More training epochs (SGD)

  As the heading says, we increase the number of epochs to 30 while keeping SGD as the optimizer; the only code change is the epoch range of the entry loop (see the sketch below). To save space, only the test-set loss and accuracy are printed.
  Around num18 the test-set accuracy levels off at roughly 96%, and the loss settles to fluctuate around 0.1.
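
  Concretely (a sketch; train() and test() are the functions defined in sections 3 and 4):

# train for 30 epochs instead of 9; everything else stays the same
for epoch in range(1, 31):
    print("test num" + str(epoch))
    train(epoch)
    test()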

Test set: Average loss: 2.2955, Accuracy: 1018/10000 (10%)

test num1

Test set: Average loss: 2.2812, Accuracy: 2697/10000 (26%)

test num2

Test set: Average loss: 2.2206, Accuracy: 3862/10000 (38%)

test num3

Test set: Average loss: 1.8014, Accuracy: 6100/10000 (61%)

test num4

Test set: Average loss: 0.7187, Accuracy: 8049/10000 (80%)

test num5

Test set: Average loss: 0.4679, Accuracy: 8593/10000 (85%)

test num6

Test set: Average loss: 0.3685, Accuracy: 8898/10000 (88%)

test num7

Test set: Average loss: 0.3006, Accuracy: 9108/10000 (91%)

test num8

Test set: Average loss: 0.2713, Accuracy: 9177/10000 (91%)

test num9

Test set: Average loss: 0.2343, Accuracy: 9270/10000 (92%)

test num10

Test set: Average loss: 0.2071, Accuracy: 9370/10000 (93%)

test num11

Test set: Average loss: 0.1910, Accuracy: 9413/10000 (94%)

test num12

Test set: Average loss: 0.1783, Accuracy: 9453/10000 (94%)

test num13

Test set: Average loss: 0.1612, Accuracy: 9482/10000 (94%)

test num14

Test set: Average loss: 0.1603, Accuracy: 9497/10000 (94%)

test num15

Test set: Average loss: 0.1522, Accuracy: 9526/10000 (95%)

test num16

Test set: Average loss: 0.1410, Accuracy: 9555/10000 (95%)

test num17

Test set: Average loss: 0.1338, Accuracy: 9573/10000 (95%)

test num18

Test set: Average loss: 0.1307, Accuracy: 9588/10000 (95%)

test num19

Test set: Average loss: 0.1212, Accuracy: 9610/10000 (96%)

test num20

Test set: Average loss: 0.1232, Accuracy: 9622/10000 (96%)

test num21

Test set: Average loss: 0.1149, Accuracy: 9646/10000 (96%)

test num22

Test set: Average loss: 0.1104, Accuracy: 9652/10000 (96%)

test num23

Test set: Average loss: 0.1072, Accuracy: 9668/10000 (96%)

test num24

Test set: Average loss: 0.1113, Accuracy: 9646/10000 (96%)

test num25

Test set: Average loss: 0.1037, Accuracy: 9659/10000 (96%)

test num26

Test set: Average loss: 0.0970, Accuracy: 9700/10000 (97%)

test num27

Test set: Average loss: 0.1013, Accuracy: 9692/10000 (96%)

test num28

Test set: Average loss: 0.1015, Accuracy: 9675/10000 (96%)

test num29

Test set: Average loss: 0.0952, Accuracy: 9711/10000 (97%)

test num30

Test set: Average loss: 0.0885, Accuracy: 9727/10000 (97%)


Process finished with exit code 0

5.2 Adam (30 epochs)

  Adam lives up to its reputation: after the very first epoch it already matches the loss and accuracy that SGD only reached at convergence. At num26 the model's test-set accuracy reaches 99% with a loss of 0.0397, although the accuracy does not quite settle at 99%.
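
  The only code change is the optimizer line; this is the same call used in the complete code in section 6:

import torch.optim as optim

# swap SGD for Adam; lr=0.001 is the value used in the full script below
optimizer = optim.Adam(model.parameters(), lr=0.001)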

test num1

Test set: Average loss: 0.1108, Accuracy: 9660/10000 (96%)

test num2

Test set: Average loss: 0.0932, Accuracy: 9709/10000 (97%)

test num3

Test set: Average loss: 0.0628, Accuracy: 9800/10000 (98%)

test num4

Test set: Average loss: 0.0562, Accuracy: 9813/10000 (98%)

test num5

Test set: Average loss: 0.0478, Accuracy: 9832/10000 (98%)

test num6

Test set: Average loss: 0.0442, Accuracy: 9850/10000 (98%)

test num7

Test set: Average loss: 0.0386, Accuracy: 9863/10000 (98%)

test num8

Test set: Average loss: 0.0768, Accuracy: 9753/10000 (97%)

test num9

Test set: Average loss: 0.0343, Accuracy: 9879/10000 (98%)

test num10

Test set: Average loss: 0.0347, Accuracy: 9877/10000 (98%)

test num11

Test set: Average loss: 0.0494, Accuracy: 9825/10000 (98%)

test num12

Test set: Average loss: 0.0571, Accuracy: 9811/10000 (98%)

test num13

Test set: Average loss: 0.0342, Accuracy: 9887/10000 (98%)

test num14

Test set: Average loss: 0.0400, Accuracy: 9870/10000 (98%)

test num15

Test set: Average loss: 0.0339, Accuracy: 9889/10000 (98%)

test num16

Test set: Average loss: 0.0371, Accuracy: 9889/10000 (98%)

test num17

Test set: Average loss: 0.0402, Accuracy: 9872/10000 (98%)

test num18

Test set: Average loss: 0.0434, Accuracy: 9887/10000 (98%)

test num19

Test set: Average loss: 0.0377, Accuracy: 9877/10000 (98%)

test num20

Test set: Average loss: 0.0402, Accuracy: 9883/10000 (98%)

test num21

Test set: Average loss: 0.0407, Accuracy: 9886/10000 (98%)

test num22

Test set: Average loss: 0.0482, Accuracy: 9871/10000 (98%)

test num23

Test set: Average loss: 0.0414, Accuracy: 9891/10000 (98%)

test num24

Test set: Average loss: 0.0407, Accuracy: 9890/10000 (98%)

test num25

Test set: Average loss: 0.0403, Accuracy: 9898/10000 (98%)

test num26

Test set: Average loss: 0.0397, Accuracy: 9902/10000 (99%)

test num27

Test set: Average loss: 0.0491, Accuracy: 9873/10000 (98%)

test num28

Test set: Average loss: 0.0416, Accuracy: 9896/10000 (98%)

test num29

Test set: Average loss: 0.0450, Accuracy: 9897/10000 (98%)

test num30

Test set: Average loss: 0.0500, Accuracy: 9875/10000 (98%)


Process finished with exit code 0

5.3 AdaBound (30 epochs)

  AdaBound is the optimizer recently proposed by undergraduates from Peking University and Zhejiang University, advertised as training as fast as Adam while matching the final performance of SGD.
  By num4 and num5 the accuracy already reaches 98%, with a loss lower than Adam's at convergence. At num8 the accuracy breaks 99% and the loss drops to 0.0303! In the following epochs the accuracy and loss wobble slightly, but as training continues they settle near their best values and the fluctuations stay small.
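
  Again only the optimizer construction changes. The call below is the one shown (commented out) in the complete code in section 6; it assumes the adabound package is installed, e.g. via pip:

import adabound

# AdaBound starts out Adam-like (lr) and gradually moves toward an SGD-like
# regime bounded by final_lr
optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)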

test num1

Test set: Average loss: 0.1239, Accuracy: 9614/10000 (96%)

test num2

Test set: Average loss: 0.0965, Accuracy: 9704/10000 (97%)

test num3

Test set: Average loss: 0.0637, Accuracy: 9794/10000 (97%)

test num4

Test set: Average loss: 0.0485, Accuracy: 9852/10000 (98%)

test num5

Test set: Average loss: 0.0403, Accuracy: 9870/10000 (98%)

test num6

Test set: Average loss: 0.0513, Accuracy: 9836/10000 (98%)

test num7

Test set: Average loss: 0.0446, Accuracy: 9856/10000 (98%)

test num8

Test set: Average loss: 0.0303, Accuracy: 9910/10000 (99%)

test num9

Test set: Average loss: 0.0411, Accuracy: 9873/10000 (98%)

test num10

Test set: Average loss: 0.0422, Accuracy: 9870/10000 (98%)

test num11

Test set: Average loss: 0.0319, Accuracy: 9894/10000 (98%)

test num12

Test set: Average loss: 0.0303, Accuracy: 9905/10000 (99%)

test num13

Test set: Average loss: 0.0338, Accuracy: 9897/10000 (98%)

test num14

Test set: Average loss: 0.0313, Accuracy: 9904/10000 (99%)

test num15

Test set: Average loss: 0.0285, Accuracy: 9920/10000 (99%)

test num16

Test set: Average loss: 0.0319, Accuracy: 9917/10000 (99%)

test num17

Test set: Average loss: 0.0427, Accuracy: 9884/10000 (98%)

test num18

Test set: Average loss: 0.0351, Accuracy: 9894/10000 (98%)

test num19

Test set: Average loss: 0.0337, Accuracy: 9897/10000 (98%)

test num20

Test set: Average loss: 0.0321, Accuracy: 9910/10000 (99%)

test num21

Test set: Average loss: 0.0354, Accuracy: 9908/10000 (99%)

test num22

Test set: Average loss: 0.0332, Accuracy: 9905/10000 (99%)

test num23

Test set: Average loss: 0.0347, Accuracy: 9904/10000 (99%)

test num24

Test set: Average loss: 0.0362, Accuracy: 9906/10000 (99%)

test num25

Test set: Average loss: 0.0402, Accuracy: 9900/10000 (99%)

test num26

Test set: Average loss: 0.0380, Accuracy: 9900/10000 (99%)

test num27

Test set: Average loss: 0.0378, Accuracy: 9914/10000 (99%)

test num28

Test set: Average loss: 0.0356, Accuracy: 9913/10000 (99%)

test num29

Test set: Average loss: 0.0360, Accuracy: 9912/10000 (99%)


Process finished with exit code 0

6. Complete Code

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import adabound

# Training settings
batch_size = 64

# dataset and dataloader
# MNIST Dataset
# MNIST is bundled with torchvision's datasets module and can be loaded directly
train_dataset = datasets.MNIST(root='./data/',
                               train=True,
                               transform=transforms.ToTensor(),
                               download=True)

test_dataset = datasets.MNIST(root='./data/',
                              train=False,
                              transform=transforms.ToTensor())

# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # input: 1 channel, output: 10 channels, 5*5 kernel
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.conv3 = nn.Conv2d(20, 40, 3)

        self.mp = nn.MaxPool2d(2)
        # fully connected layer (in_features, out_features)
        self.fc = nn.Linear(40, 10)

    def forward(self, x):
        # x is a 4-D tensor of shape (batch_size, channels, height, width)
        # x.size(0) is the batch size; we keep it for the flattening step below
        in_size = x.size(0)
        # x: 64*1*28*28
        x = F.relu(self.mp(self.conv1(x)))
        # conv1: (n + 2p - f)/s + 1 = 28 - 5 + 1 = 24, so 24*24 before pooling; 2*2 pooling halves it to 12*12 -> x: 64*10*12*12
        x = F.relu(self.mp(self.conv2(x)))
        # conv2: 12 - 5 + 1 = 8, pooled to 4*4 -> x: 64*20*4*4
        x = F.relu(self.mp(self.conv3(x)))
        # conv3: 4 - 3 + 1 = 2, pooled to 1*1 -> x: 64*40*1*1

        x = x.view(in_size, -1)  # flatten the tensor (a reshape)
        # print(x.size())
        # x: 64*40
        x = self.fc(x)
        # x: 64*10
        # print(x.size())
        return F.log_softmax(x, dim=1)  # 64*10


model = Net()
model.cuda()
# optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.5)
# optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)
optimizer = optim.Adam(model.parameters(), lr=0.001)

def train(epoch):
    # enumerate() wraps an iterable so that we get both an index and the value at each step
    for batch_idx, (data, target) in enumerate(train_loader):  # batch_idx is the index supplied by enumerate(), starting at 0
        # data.size(): [64, 1, 28, 28]
        # target.size(): [64]
        output = model(data.cuda())
        # print(batch_idx)
        # output: 64*10
        loss = F.nll_loss(output, target.cuda())
        # print progress every 200 batches
        # if batch_idx % 200 == 0:
        #     print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.
        #           format(
        #         epoch,
        #         batch_idx * len(data),
        #         len(train_loader.dataset),
        #         100. * batch_idx / len(train_loader),
        #         loss.item()))

        optimizer.zero_grad()   # zero the gradients of all parameters
        loss.backward()         # backpropagate to compute gradients
        optimizer.step()        # let the optimizer update the parameters


def test():
    test_loss = 0
    correct = 0
    # no gradients are needed during evaluation
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.cuda(), target.cuda()
            output = model(data)
            # accumulate the summed loss over the whole test set
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            # get the index of the max log-probability, i.e. the predicted class
            pred = output.data.max(1, keepdim=True)[1]
            # print(pred)
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


for epoch in range(1, 31):  # 30 epochs
    print("test num"+str(epoch))
    train(epoch)
    test()

7. References and Acknowledgements

  • AdaBound詳解【首發】 (a detailed write-up of AdaBound)
  • This experiment was completed with the generous help of my classmate He Shulin.
  • The code was adapted from a reference found online; I accidentally closed the page and can no longer locate the original source. If you recognise it, please let me know so that I can credit it here.