PyTorch Learning (2): Defining a Neural Network

Defining a Neural Network in PyTorch

Deep learning uses artificial neural networks (models), which are computing systems composed of many layers of interconnected units. By passing data through these interconnected units, a neural network learns how to approximate the computation that transforms inputs into outputs. In PyTorch, neural networks can be constructed using the torch.nn package.

Introduction

PyTorch provides elegantly designed modules and classes, including torch.nn, to help us create and train neural networks. An nn.Module contains layers and a forward(input) method that returns an output.
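As a minimal sketch of that structure (the TinyNet name and layer size below are made up for illustration, not part of this tutorial's model), an nn.Module can be as small as a single layer:

import torch
import torch.nn as nn

# Minimal sketch of an nn.Module: layers are declared in __init__,
# and forward(input) defines how an input becomes an output.
# TinyNet and its sizes are hypothetical, for illustration only.
class TinyNet(nn.Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc = nn.Linear(4, 2)  # one fully connected layer: 4 inputs -> 2 outputs

    def forward(self, x):
        return self.fc(x)

tiny = TinyNet()
print(tiny(torch.rand(1, 4)))  # a 1x2 output tensor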

Steps

1. Import the necessary libraries

2. Define and initialize the neural network

3. Specify how data will pass through the model

4. [Optional] Test the model

1. Import necessary libraries for loading our data

import torch
import torch.nn as nn
import torch.nn.functional as F

2. Define and initialize the neural network

Our network will recognize images. We will use a process built into PyTorch called convolution. Convolution adds each element of an image to its local neighbors, weighted by a kernel, or a small matrix, which helps us extract certain features (like edge detection, sharpness, blurriness, etc.) from the input image.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # First 2D convolutional layer, taking in 1 input channel (image),
        # outputting 32 convolutional features, with a square kernel size of 3
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        # Second 2D convolutional layer, taking in the 32 input layers,
        # outputting 64 convolutional features, with a square kernel size of 3
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        # Designed to ensure that adjacent pixels are either all 0s or all active
        # with an input probability
        self.dropout1 = nn.Dropout2d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)

        # First fully connected layer
        self.fc1 = nn.Linear(9216, 128)
        # Second fully connected layer that outputs our labels
        self.fc2 = nn.Linear(128, 10)

my_nn = Net()
print(my_nn)
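To see the convolution described above in isolation, here is a small sketch (the 3x3 edge-detection kernel values are a common illustrative choice, not taken from this tutorial) using torch.nn.functional.conv2d:

import torch
import torch.nn.functional as F

# A 3x3 edge-detection kernel; weight shape is
# (out_channels, in_channels, height, width).
kernel = torch.tensor([[[[-1., -1., -1.],
                         [-1.,  8., -1.],
                         [-1., -1., -1.]]]])

image = torch.rand(1, 1, 28, 28)  # one random single-channel 28x28 "image"
edges = F.conv2d(image, kernel)   # each output pixel is the kernel-weighted
print(edges.shape)                # sum of its neighbors -> (1, 1, 26, 26)

Each output value adds a pixel's local neighbors together, weighted by the kernel, which is exactly what nn.Conv2d learns to do with trainable kernels.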
        

3. Specify how data will pass through your model

When you build a model with PyTorch, you only need to define the forward function, which passes data into the computation graph (i.e., our neural network). This represents our feed-forward algorithm.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout2d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    # x represents our data
    def forward(self, x):
        # Pass data through conv1
        x = self.conv1(x)
        # Use the rectified-linear activation function over x
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)

        # Run max pooling over x
        x = F.max_pool2d(x, 2)
        # Pass data through dropout1
        x = self.dropout1(x)
        # Flatten x with start_dim=1
        x = torch.flatten(x, 1)
        # Pass data through fc1
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)

        # Apply log_softmax to x
        output = F.log_softmax(x, dim=1)
        return output
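The 9216 passed to fc1 follows from these shapes: a 28x28 input shrinks to 26x26 after conv1 (kernel size 3, stride 1), to 24x24 after conv2, and to 12x12 after the 2x2 max pooling, so flattening 64 channels gives 64 * 12 * 12 = 9216. A quick sketch to verify the shapes (assuming the Net class defined above):

net = Net()
x = torch.rand(1, 1, 28, 28)  # one 28x28 single-channel image
x = net.conv1(x)              # torch.Size([1, 32, 26, 26])
x = net.conv2(x)              # torch.Size([1, 64, 24, 24])
x = F.max_pool2d(x, 2)        # torch.Size([1, 64, 12, 12])
x = torch.flatten(x, 1)       # torch.Size([1, 9216])
print(x.shape)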

4. [Optional] Pass data through your model to test

To make sure we receive our desired output, let's test our model by passing some random data through it.

# Equates to one random 28x28 image
random_data = torch.rand((1, 1, 28, 28))

my_nn = Net()
result = my_nn(random_data)
print(result)
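The result is a 1x10 tensor of log-probabilities, one per label. As a hypothetical follow-up (not part of the original steps), the predicted label is the index with the highest log-probability:

# Hypothetical: read off the most likely label from the log-probabilities
prediction = result.argmax(dim=1)
print(prediction)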

 
