2 PyTorch autograd


Autograd: Automatic Differentiation

The autograd package is at the core of building neural networks in PyTorch.

Tensor

The automatic differentiation workflow (a compact sketch follows this list):

Set the tensor's .requires_grad attribute to True
As the tensor flows through computations, each result's .grad_fn attribute automatically records the operation that created it (for user-created tensors this attribute is None)
When the computation is finished, call .backward() on the scalar result to compute all gradients automatically
The gradients are accumulated into the tensors' .grad attributes
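
A minimal end-to-end sketch of the four steps above (the variable name w and its values are illustrative; all calls are standard PyTorch):

import torch

w = torch.tensor([2.0, 3.0], requires_grad=True)   # step 1: track gradients
loss = (w * w).sum()                                # step 2: loss.grad_fn records <SumBackward0>
loss.backward()                                     # step 3: backprop from the scalar result
print(w.grad)                                       # step 4: d(loss)/dw = 2*w -> tensor([4., 6.])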

import torch

Create a tensor and set requires_grad=True to track computations on it:

x = torch.ones(2, 2, requires_grad=True)
print(x)
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)

Perform a tensor operation:

y = x + 2
print(y)
tensor([[3., 3.],
        [3., 3.]], grad_fn=<AddBackward0>)

y was created as the result of an operation, so it has a grad_fn:

print(y.grad_fn)
<AddBackward0 object at 0x0000020B017AE978>
z = y * y * 3
out = z.mean()

print(z, out)
tensor([[27., 27.],
        [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)

.requires_grad_( ... ) changes an existing tensor's requires_grad flag in place; its argument defaults to True. Tensors created by the user have requires_grad=False by default:

a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
False
True
<SumBackward0 object at 0x0000020B017B3940>

Gradients

Because out contains a single scalar, out.backward() is equivalent to out.backward(torch.tensor(1.)).

out.backward()

Print the gradient $\dfrac{d(out)}{dx}$:

print(x.grad)
tensor([[4.5000, 4.5000],
        [4.5000, 4.5000]])
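
To see the equivalence stated above, here is a minimal sketch that repeats the computation on a fresh tensor (fresh, because gradients would otherwise accumulate in x.grad):

x2 = torch.ones(2, 2, requires_grad=True)
out2 = (3 * (x2 + 2) ** 2).mean()
out2.backward(torch.tensor(1.))   # same result as out2.backward() for a scalar output
print(x2.grad)                    # tensor([[4.5000, 4.5000], [4.5000, 4.5000]])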

We have $o = \dfrac{1}{4}\sum_i z_i$,
$z_i = 3(x_i+2)^2$ and $z_i\bigr\rvert_{x_i=1} = 27$.

Therefore $\dfrac{\partial o}{\partial x_i} = \dfrac{3}{2}(x_i+2)$, hence
$\dfrac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \dfrac{9}{2} = 4.5$.
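
A quick numerical check of this derivation, reusing the x and x.grad from above (a sketch, not part of the original output):

analytic = 1.5 * (x + 2)                  # d(out)/dx = 3/2 * (x + 2)
print(torch.allclose(x.grad, analytic))   # True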

For a vector-valued function $\vec{y}=f(\vec{x})$, the Jacobian matrix of $\vec{y}$ with respect to $\vec{x}$ is:

\begin{align}J=\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)\end{align}

torch.autograd is an engine for computing vector-Jacobian products:
given any vector $v=\left(\begin{array}{cccc} v_{1} & v_{2} & \cdots & v_{m}\end{array}\right)^{T}$,
it computes the product $v^{T}\cdot J$.
If $v$ happens to be the gradient of a scalar function $l=g\left(\vec{y}\right)$, that is,
$v=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)^{T}$,
then by the chain rule the vector-Jacobian product is the gradient of $l$ with respect to $\vec{x}$:

\begin{align}J^{T}\cdot v=\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)\left(\begin{array}{c}
\frac{\partial l}{\partial y_{1}}\\
\vdots\\
\frac{\partial l}{\partial y_{m}}
\end{array}\right)=\left(\begin{array}{c}
\frac{\partial l}{\partial x_{1}}\\
\vdots\\
\frac{\partial l}{\partial x_{n}}
\end{array}\right)\end{align}

This property of the vector-Jacobian product makes it very convenient to feed external gradients into a model with a non-scalar output.
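
To make this concrete, here is a minimal sketch that checks backward(v) against an explicit $J^{T}\cdot v$; it assumes PyTorch 1.5+ for torch.autograd.functional.jacobian, and the function f is purely illustrative:

from torch.autograd.functional import jacobian

def f(t):
    return t * 2 + t ** 2                      # simple vector-valued function

t = torch.randn(3, requires_grad=True)
v = torch.tensor([0.5, 1.0, 2.0])

f(t).backward(v)                               # vector-Jacobian product via autograd
J = jacobian(f, t)                             # explicit 3x3 Jacobian
print(torch.allclose(t.grad, J.t() @ v))       # True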

x = torch.randn(3, requires_grad=True)

y = x * 2
while y.data.norm() < 1000:
    y = y * 2

print(y)
tensor([971.0132, 473.3496, 795.3892], grad_fn=<MulBackward0>)

\color{#FF0000}{Note}

At this point y is not a scalar, so calling y.backward() directly raises RuntimeError: grad can be implicitly created only for scalar outputs.
For a tensor like y, pass a vector v to y.backward(v) instead; autograd then computes the vector-Jacobian product $v^{T}\cdot J$ rather than a full Jacobian (the Jacobian can still be recovered row by row, as shown in the sketch after the example below).
For example, with v = torch.tensor([1, 1, 1], dtype=torch.float) the result is the gradient of y.sum() with respect to x.
This is also why, for a scalar output, calling .backward() with no argument is equivalent to .backward(torch.tensor(1.)).

v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)

print(x.grad)
tensor([1.0240e+02, 1.0240e+03, 1.0240e-01])
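
Building on this, the full Jacobian can be recovered row by row by calling backward with one-hot vectors. A sketch (retain_graph=True keeps the graph alive between calls, and .grad must be cleared because gradients accumulate):

xs = torch.randn(3, requires_grad=True)
ys = xs * 2                                # Jacobian is 2 * I

rows = []
for i in range(3):
    v = torch.zeros(3)
    v[i] = 1.0
    ys.backward(v, retain_graph=True)
    rows.append(xs.grad.clone())
    xs.grad.zero_()

print(torch.stack(rows))                   # tensor([[2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])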

Inside a with torch.no_grad(): block, operations are not tracked even on tensors with .requires_grad=True:

print(x.requires_grad)
print((x ** 2).requires_grad)

with torch.no_grad():
    print((x ** 2).requires_grad)
True
True
False
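
A common use of torch.no_grad() is to apply parameter updates without recording them in the graph; the learning rate and shapes below are illustrative (a sketch):

w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()

with torch.no_grad():
    w -= 0.1 * w.grad     # the in-place update itself is not tracked
    w.grad.zero_()        # clear accumulated gradients before the next step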

.detach() creates a new tensor with the same content but with .requires_grad=False:

print(x.requires_grad)
y = x.detach()
print(y.requires_grad)
print(x.eq(y).all())
True
False
tensor(True)
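
detach() can also be used to block gradient flow through part of a computation; a minimal sketch (the values are illustrative):

u = torch.ones(2, requires_grad=True)
u_const = (u * 2).detach()     # same values, but treated as a constant by autograd
res = (u_const * u).sum()
res.backward()
print(u.grad)                  # tensor([2., 2.]); no gradient flows through u_const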

Further reading:

For documentation on autograd.Function, see
https://pytorch.org/docs/stable/autograd.html#function
