Python/PyTorch Basics
https://www.cnblogs.com/nickchen121
A note from the training course: Django is analogous to Transformers here.
First, there is a norm function.
The residual is handled inside this norm step: it takes x and z1 (the sublayer output, shown in light pink in the diagram) as input together with the residual value, and outputs the normalized z1 (shown in purple-pink in the diagram).
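In symbols, a sketch of that step (my notation; \(z_1\) denotes the self-attention output):
\[
z_1' = \mathrm{LayerNorm}(x + z_1)
\]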
Normalization
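The layer normalization formula (the standard form, using the symbols defined below):
\[
y = \gamma \cdot \frac{x - E(x)}{\sqrt{Var(x) + \epsilon}} + \beta
\]
where: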
- \(E(x)\): the mean of x
- \(Var(x)\): the variance of x
- \(\epsilon\): a small constant added to the variance so the denominator is never 0
- \(\gamma\) and \(\beta\): learnable parameters; both are updated as training progresses
import torch
import torch.nn as nn


class LayerNorm(nn.Module):
    def __init__(self, feature, eps=1e-6):
        """
        :param feature: size of the last dimension of x in self-attention (i.e. d_model)
        :param eps: small constant that keeps the denominator from being 0
        """
        super(LayerNorm, self).__init__()
        # the learnable gamma and beta from the formula above
        self.a_2 = nn.Parameter(torch.ones(feature))
        self.b_2 = nn.Parameter(torch.zeros(feature))
        self.eps = eps

    def forward(self, x):
        # mean and std over the last (feature) dimension
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        # note: eps is added to the std here, not to the variance under the sqrt
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
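A minimal usage sketch (the shapes are illustrative, assuming the imports above). The output is close to, but not bit-identical to, torch.nn.LayerNorm, since eps is added outside the square root here and x.std applies Bessel's correction by default:

x = torch.randn(2, 4, 8)  # (batch, seq_len, d_model)
ln = LayerNorm(8)
out = ln(x)
print(out.shape)      # torch.Size([2, 4, 8])
print(out.mean(-1))   # roughly 0 along the feature dimension
print(out.std(-1))    # roughly 1 along the feature dimension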
Residual + Normalization
class SublayerConnection(nn.Module):
    """
    This does more than the residual alone: it applies the residual
    connection and the layer norm together.
    """
    def __init__(self, size, dropout=0.1):
        super(SublayerConnection, self).__init__()
        # step 1: layer norm
        self.layer_norm = LayerNorm(size)
        # step 2: dropout
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x, sublayer):
        """
        :param x: the input to self-attention
        :param sublayer: the self-attention layer itself
        :return: dropout(layer_norm(x + sublayer(x)))
        """
        return self.dropout(self.layer_norm(x + sublayer(x)))
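A toy wiring sketch; here nn.Linear merely stands in for the real self-attention sublayer, and the shapes and variable names are illustrative:

d_model = 8
conn = SublayerConnection(d_model)
toy_sublayer = nn.Linear(d_model, d_model)  # stand-in for self-attention
x = torch.randn(2, 4, d_model)
out = conn(x, toy_sublayer)  # dropout(layer_norm(x + toy_sublayer(x)))
print(out.shape)             # torch.Size([2, 4, 8])

Note that this is the post-norm arrangement (normalize after the residual add); the Annotated Transformer instead uses the pre-norm variant, return x + self.dropout(sublayer(self.layer_norm(x))).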