Getting Started with PyTorch (Compared with a NumPy Implementation of a Simple Neural Network)

1. A two-layer neural network implemented in numpy

A fully connected ReLU network with one hidden layer and no bias. It predicts y from x and is trained with an L2 loss.

  • h = W_1 x
  • a = max(0, h)
  • y_hat = W_2 a

This implementation uses numpy alone to compute the:

  • forward pass
  • loss
  • backward pass

A numpy ndarray is just an ordinary n-dimensional array. It knows nothing about deep learning, gradients, or computation graphs; it is simply a data structure for numerical computation.
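For reference, the gradients computed by hand in the backward pass below all follow from the chain rule. A quick sketch, written with the row-major convention actually used in the code (h = x·W_1, y_hat = a·W_2), so the transposes line up with the array shapes:

\begin{aligned}
\text{loss} &= \textstyle\sum (\hat{y} - y)^2, \qquad
\frac{\partial \text{loss}}{\partial \hat{y}} = 2(\hat{y} - y) \\
\frac{\partial \text{loss}}{\partial W_2} &= a^{\top} \frac{\partial \text{loss}}{\partial \hat{y}}, \qquad
\frac{\partial \text{loss}}{\partial a} = \frac{\partial \text{loss}}{\partial \hat{y}}\, W_2^{\top} \\
\frac{\partial \text{loss}}{\partial h} &= \frac{\partial \text{loss}}{\partial a} \odot \mathbf{1}[h > 0], \qquad
\frac{\partial \text{loss}}{\partial W_1} = x^{\top} \frac{\partial \text{loss}}{\partial h}
\end{aligned}

Each quantity corresponds one-to-one to a grad_* variable in the loop below.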

import numpy as np

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    h = x.dot(w1) # N * H
    h_relu = np.maximum(h, 0) # N * H
    y_pred = h_relu.dot(w2) # N * D_out
    
    # compute loss
    loss = np.square(y_pred - y).sum()
    print(it, loss)
    
    # Backward pass
    # compute the gradient
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h<0] = 0
    grad_w1 = x.T.dot(grad_h)
    
    # update weights of w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

2. PyTorch: Tensors

This time we use PyTorch tensors to build the forward pass, compute the loss, and run the backward pass.

A PyTorch Tensor is very similar to a numpy ndarray, with one key difference: a PyTorch Tensor can run on either the CPU or the GPU. To compute on the GPU, the tensor has to be moved to a CUDA device.
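A minimal sketch of what that looks like (device selection only; the rest of this post keeps all tensors on the CPU):

import torch

# pick the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(64, 1000, device=device)  # create the tensor directly on that device
w = torch.randn(1000, 100).to(device)     # or move an existing tensor over
h = x.mm(w)                               # this matmul runs on the GPU if one was found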

import torch

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

w1 = torch.randn(D_in, H)
w2 = torch.randn(H, D_out)

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    h = x.mm(w1) # N * H
    h_relu = h.clamp(min=0) # N * H
    y_pred = h_relu.mm(w2) # N * D_out
    
    # compute loss
    loss = (y_pred - y).pow(2).sum().item()
    print(it, loss)
    
    # Backward pass
    # compute the gradient
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h<0] = 0
    grad_w1 = x.t().mm(grad_h)
    
    # update weights of w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

3. PyTorch: Tensors and autograd

A key feature of PyTorch is autograd: once the forward pass is defined and the loss has been computed, PyTorch can automatically compute the gradients of all model parameters.

A PyTorch Tensor represents a node in a computation graph. If x is a Tensor with x.requires_grad=True, then x.grad is another Tensor holding the gradient of x with respect to some scalar (usually the loss).
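A tiny, self-contained example of requires_grad and .grad (the numbers are only for illustration):

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2         # building y records a small computation graph
y.backward()       # fills x.grad with dy/dx evaluated at x = 2
print(x.grad)      # tensor(4.)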

import torch

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    
    # compute loss
    loss = (y_pred - y).pow(2).sum() # computation graph
    print(it, loss.item())
    
    # Backward pass
    loss.backward()
    
    # update weights of w1 and w2
    # wrap the update in no_grad so the update step itself is not tracked by autograd
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()
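One detail worth stressing: backward() accumulates into .grad rather than overwriting it, which is why the loop above calls grad.zero_() after every update. A minimal illustration:

import torch

w = torch.tensor(1.0, requires_grad=True)
(3 * w).backward()
(3 * w).backward()   # the second backward adds to the existing gradient
print(w.grad)        # tensor(6.), not tensor(3.)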

4. PyTorch: nn

This time we build the network with PyTorch's nn package. PyTorch autograd constructs the computation graph and computes the gradients, so we no longer write the backward pass by hand.
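Before building the full network, here is a small sketch of what a single nn.Linear layer does (the sizes here are illustrative):

import torch

layer = torch.nn.Linear(3, 2, bias=False)  # stores a weight of shape (out_features, in_features)
print(layer.weight.shape)                  # torch.Size([2, 3])

out = layer(torch.randn(5, 3))             # computes x @ weight.T for a batch of 5 inputs
print(out.shape)                           # torch.Size([5, 2])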

import torch
import torch.nn as nn

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H, bias=False), # h = x @ w_1 (no bias)
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out, bias=False),
)

torch.nn.init.normal_(model[0].weight)
torch.nn.init.normal_(model[2].weight)

# model = model.cuda()

loss_fn = nn.MSELoss(reduction='sum')  # sum of squared errors, matching the manual loss used earlier

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    y_pred = model(x) # model.forward() 
    
    # compute loss
    loss = loss_fn(y_pred, y) # computation graph
    print(it, loss.item())
    
    # Backward pass
    loss.backward()
    
    # update weights of w1 and w2
    with torch.no_grad():
        for param in model.parameters(): # each param is a tensor that carries its own .grad
            param -= learning_rate * param.grad
            
    model.zero_grad()

5. PyTorch: optim

This time we no longer update the model weights by hand; instead we use the optim package to update the parameters for us.
The optim package provides a variety of optimization methods, including SGD+momentum, RMSProp, Adam, and more.
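All of them are constructed the same way, from an iterable of parameters plus hyperparameters. A small sketch (the model and learning rates here are just placeholders):

import torch

model = torch.nn.Linear(4, 2)  # any module works; only its parameters() matter here

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)  # SGD + momentum
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)            # RMSProp
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)               # Adam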

import torch
import torch.nn as nn

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H, bias=False), # h = x @ w_1 (no bias)
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out, bias=False),
)

torch.nn.init.normal_(model[0].weight)
torch.nn.init.normal_(model[2].weight)

# model = model.cuda()

loss_fn = nn.MSELoss(reduction='sum')
# learning_rate = 1e-4
# optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

learning_rate = 1e-6
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

for it in range(500):
    # Forward pass
    y_pred = model(x) # model.forward() 
    
    # compute loss
    loss = loss_fn(y_pred, y) # computation graph
    print(it, loss.item())

    optimizer.zero_grad()
    # Backward pass
    loss.backward()
    
    # update model parameters
    optimizer.step()

6. PyTorch: Custom nn Modules

We can define a model as a class that inherits from nn.Module. Whenever we need something more complex than a Sequential model, we define our own nn.Module subclass.

import torch
import torch.nn as nn

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        # define the model architecture
        self.linear1 = torch.nn.Linear(D_in, H, bias=False)
        self.linear2 = torch.nn.Linear(H, D_out, bias=False)
    
    def forward(self, x):
        y_pred = self.linear2(self.linear1(x).clamp(min=0))
        return y_pred

model = TwoLayerNet(D_in, H, D_out)
loss_fn = nn.MSELoss(reduction='sum')
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for it in range(500):
    # Forward pass
    y_pred = model(x) # model.forward() 
    
    # compute loss
    loss = loss_fn(y_pred, y) # computation graph
    print(it, loss.item())

    optimizer.zero_grad()
    # Backward pass
    loss.backward()
    
    # update model parameters
    optimizer.step()