A Comparison of Different Normalization Methods in Deep Learning, with Code Implementations

This article introduces four Normalization methods commonly used in deep learning: Batch Normalization, Layer Normalization, Instance Normalization, and Group Normalization, analyzing their computation mainly through code.

Summary

For an input feature of size NxCxHxW:

  • BN normalizes each channel over all samples [the mean has shape C]
  • LN normalizes each sample as a whole [the mean has shape N]
  • IN normalizes each channel of each sample [the mean has shape NxC]
  • GN normalizes groups of channels within each sample (the channels are first divided into G groups) [the mean has shape NxG]; a quick shape check follows this list
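A minimal sketch (tensor sizes are arbitrary and chosen only for illustration, matching the test code further below) that verifies these mean shapes by reducing over the corresponding dimensions:

# Shape check for the summary above (illustrative sizes only)
import torch
N, C, H, W, G = 10, 20, 5, 5, 4
x = torch.rand(N, C, H, W)
print(x.mean(dim=[0, 2, 3]).shape)                           # BN: [C]
print(x.mean(dim=[1, 2, 3]).shape)                           # LN: [N]
print(x.mean(dim=[2, 3]).shape)                              # IN: [N, C]
print(x.view(N, G, C // G, H, W).mean(dim=[2, 3, 4]).shape)  # GN: [N, G]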

BatchNormalization

BN normalizes over N, H, W while preserving the C dimension; it does not work well with small batch sizes.

import torch
import torch.nn as nn

def BatchNormalization(x):
    # x: [NxCxHxW]
    # mean_std (defined at the end of this post) computes the mean and std over the given dims
    mean, std = mean_std(x, dim=[0,2,3], keepdim=True)
    x = (x - mean) / std
    return x
# track_running_stats=False: use the current batch's mean and std instead of updating the running (global) statistics
# affine=False: normalize only, without multiplying by gamma and adding beta (which would need to be learned during training)
# num_features is the number of channels of the feature map
bn = nn.BatchNorm2d(num_features=20, affine=False, track_running_stats=False)
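For reference, nn.BatchNorm2d normalizes with the biased variance estimate and adds eps inside the square root (the layer's default eps is 1e-5). A sketch of an equivalent manual computation, on PyTorch versions where torch.var accepts multiple dims, might look like this:

# Manual computation matching nn.BatchNorm2d's batch statistics (sketch)
def batch_norm_manual(x, eps=1e-5):
    mean = x.mean(dim=[0, 2, 3], keepdim=True)
    var = x.var(dim=[0, 2, 3], unbiased=False, keepdim=True)  # biased variance, as used by the layer
    return (x - mean) / torch.sqrt(var + eps)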

LayerNormalization

LN normalizes over C, H, W while preserving the N dimension.

def LayerNormalization(x):
    # x: [NxCxHxW]
    mean, std = mean_std(x, dim=[1,2,3], keepdim=True)
    x = (x - mean) / std
    return x
# elementwise_affine=False: no affine mapping
# Unlike BN above and IN below, the affine mapping here is elementwise:
# gamma and beta are not per-channel vectors but tensors whose shape equals normalized_shape
ln = nn.LayerNorm(normalized_shape=[20, 5, 5], elementwise_affine=False)
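To illustrate the comment above, a small check of the parameter shapes when elementwise_affine is enabled (same normalized_shape as the layer above):

# LayerNorm's learnable gamma (weight) and beta (bias) have shape normalized_shape
ln_affine = nn.LayerNorm(normalized_shape=[20, 5, 5], elementwise_affine=True)
print(ln_affine.weight.shape)  # torch.Size([20, 5, 5])
print(ln_affine.bias.shape)    # torch.Size([20, 5, 5])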

InstanceNormalization

IN normalizes over H, W while preserving the N and C dimensions.

def InstanceNormalization(x):
    # x: [NxCxHxW]
    mean, std = mean_std(x, dim=[2,3], keepdim=True)
    x = (x - mean) / std
    return x
# track_running_stats=False: use the current batch's mean and std instead of updating the running (global) statistics
# affine=False: normalize only, without multiplying by gamma and adding beta (which would need to be learned during training)
# num_features is the number of channels of the feature map
In = nn.InstanceNorm2d(num_features=20, affine=False, track_running_stats=False)
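As a side note, instance normalization can be viewed as batch-style statistics computed independently for every (sample, channel) pair. A sketch of this equivalence (the function name is ours, used only for illustration):

# IN on [N, C, H, W] equals batch statistics on a [1, N*C, H, W] view (sketch)
def instance_norm_via_batch_stats(x, eps=1e-5):
    N, C, H, W = x.shape
    y = x.view(1, N * C, H, W)
    mean = y.mean(dim=[0, 2, 3], keepdim=True)
    var = y.var(dim=[0, 2, 3], unbiased=False, keepdim=True)
    y = (y - mean) / torch.sqrt(var + eps)
    return y.view(N, C, H, W)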

GroupNormalization

GN first divides the channels into groups and then normalizes within each group; it is a compromise between LN and IN.

# [NxCxHxW] -> [NxGx(C//G)xHxW], then normalize over (C//G), H, W, preserving the N and G dimensions
def GroupNormalization(x, num_groups):
    # x: [NxCxHxW]
    size = x.size()
    x = x.view(size[0], num_groups, -1, size[2], size[3])
    mean, std = mean_std(x, dim=[2,3,4], keepdim=True)
    x = (x - mean) / std
    x = x.view(size)
    return x
# split the channels into 4 groups
gn = nn.GroupNorm(num_groups=4, num_channels=20, affine=False)
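The "compromise" can also be seen directly from the group count: with a single group, GN normalizes over C, H, W per sample like LN, and with one group per channel it normalizes over H, W like IN. A quick sketch comparing the two extremes against the corresponding layers (both printed sums should be close to zero):

# GN with num_groups=1 matches LN over [C, H, W]; with num_groups=C it matches IN (sketch)
x = torch.rand(10, 20, 5, 5)
gn_as_ln = nn.GroupNorm(num_groups=1, num_channels=20, affine=False)
gn_as_in = nn.GroupNorm(num_groups=20, num_channels=20, affine=False)
print(torch.sum(torch.abs(gn_as_ln(x) - nn.LayerNorm([20, 5, 5], elementwise_affine=False)(x))))
print(torch.sum(torch.abs(gn_as_in(x) - nn.InstanceNorm2d(20, affine=False, track_running_stats=False)(x))))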

The test code is as follows:

x = torch.rand(10, 20, 5, 5)
official_bn = bn(x)
my_bn = BatchNormalization(x)
print("BatchNormalization diff: %f" % torch.sum(torch.abs(official_bn-my_bn)))

official_ln = ln(x)
my_ln = LayerNormalization(x)
print("LayerNormalization diff: %f" % torch.sum(torch.abs(official_ln-my_ln)))

official_in = In(x)
my_in = InstanceNormalization(x)
print("InstanceNormalization diff: %f" % torch.sum(torch.abs(official_in-my_in)))

official_gn = gn(x)
my_gn = GroupNormalization(x, num_groups=4)
print("GroupNormalization diff: %f" % torch.sum(torch.abs(official_gn-my_gn)))

Comparing our own computation with the official PyTorch implementation, the results are:

BatchNormalization diff: 8.509657
LayerNormalization diff: 4.204389
InstanceNormalization diff: 86.581696
GroupNormalization diff: 17.165443

The IN result shows the largest gap from the official implementation. The gap most likely comes from how the statistics are estimated: torch.std uses the unbiased (Bessel-corrected) estimator by default, and our helper adds eps to the std, whereas PyTorch's normalization layers use the biased variance and add eps inside the square root. The fewer elements each statistic is computed over, the larger the relative difference, and IN averages over only H*W = 25 elements per channel, hence the biggest gap.
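A quick way to check this explanation is to recompute the IN statistics with the biased variance and eps inside the square root; the difference against nn.InstanceNorm2d should then drop to numerical noise. A sketch, reusing the x and In defined above:

# Recompute IN with the biased variance and eps inside the sqrt; the diff should be ~0
mean = x.mean(dim=[2, 3], keepdim=True)
var = x.var(dim=[2, 3], unbiased=False, keepdim=True)
my_in_biased = (x - mean) / torch.sqrt(var + 1e-5)
print("InstanceNormalization (biased) diff: %f" % torch.sum(torch.abs(In(x) - my_in_biased)))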

In addition, the mean_std function used above is a hand-written helper that computes the mean and standard deviation over multiple dimensions at once (before PyTorch 1.1.0, torch.std did not support reducing over multiple dimensions simultaneously).

def mean_std(x, dim, keepdim=False, eps=1e-5):
    # Compute mean and std over the given dims by moving them to the end and flattening them.
    dim = list(dim) if not isinstance(dim, int) else [dim]
    size = list(x.size())
    dims = len(size)
    # Move the kept dimensions to the front and the reduced dimensions to the back.
    permute_dim = [i for i in range(dims) if i not in dim]
    permute_dim += dim
    x = x.permute(*permute_dim)
    # Collapse all reduced dimensions into a single trailing dimension.
    view_size = [size[i] for i in range(dims) if i not in dim]
    view_size += [-1]
    x = x.contiguous()
    x = x.view(view_size)
    mean = torch.mean(x, dim=-1, keepdim=False)
    # Note: torch.std defaults to the unbiased (Bessel-corrected) estimator, and eps is added
    # to the std rather than to the variance; both differ slightly from PyTorch's norm layers.
    std = torch.std(x, dim=-1, keepdim=False) + eps
    if keepdim:
        # Restore singleton dimensions in place of the reduced ones.
        final_size = [size[i] if i not in dim else 1 for i in range(dims)]
        mean = mean.view(final_size)
        std = std.view(final_size)
    return mean, std
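On recent PyTorch versions this helper is no longer strictly necessary, since torch.std accepts multiple dimensions directly. A minimal equivalent (keeping the same eps handling as the helper above) might look like:

# Multi-dim reduction is supported natively in recent PyTorch versions
def mean_std_native(x, dim, keepdim=False, eps=1e-5):
    mean = torch.mean(x, dim=dim, keepdim=keepdim)
    std = torch.std(x, dim=dim, keepdim=keepdim) + eps  # still the unbiased estimator by default
    return mean, std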