Attention mechanism paper: Concurrent Spatial and Channel SE in Fully Convolutional Networks and its PyTorch implementation

Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks
PDF: https://arxiv.org/pdf/1803.02579v2.pdf
PyTorch code: https://github.com/shanglianlm0525/PyTorch-Networks

1 Overview

This paper improves on the SE module by designing three SE variant structures, cSE, sSE, and scSE, which achieve appreciable improvements on MRI brain segmentation and CT organ segmentation tasks.


2 Spatial Squeeze and Channel Excitation Block (cSE)

This is the original SE block (named cSE_Module below so that the scSE block in Section 4 can reuse it); for details, see the post Attention paper: Squeeze-and-Excitation Networks and its PyTorch implementation.
PyTorch code:

import torch
import torch.nn as nn

class cSE_Module(nn.Module):
    def __init__(self, channel, ratio=16):
        super(cSE_Module, self).__init__()
        # Squeeze: global average pooling reduces each channel's spatial map to a single value
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: a bottleneck MLP maps the pooled vector to per-channel weights in (0, 1)
        self.excitation = nn.Sequential(
                nn.Linear(in_features=channel, out_features=channel // ratio),
                nn.ReLU(inplace=True),
                nn.Linear(in_features=channel // ratio, out_features=channel),
                nn.Sigmoid()
            )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.squeeze(x).view(b, c)
        z = self.excitation(y).view(b, c, 1, 1)
        # Rescale the input: each channel is multiplied by its learned weight
        return x * z.expand_as(x)
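
As an illustrative usage (a minimal sketch, not from the original post; the tensor sizes below are arbitrary), the block leaves the input shape unchanged while scaling each channel by a learned weight:

# Illustrative only: input sizes are arbitrary
x = torch.randn(2, 64, 32, 32)                         # (batch, channels, height, width)
cse = cSE_Module(channel=64, ratio=16)
weights = cse.excitation(cse.squeeze(x).view(2, 64))   # per-channel weights in (0, 1), shape (2, 64)
out = cse(x)                                           # same shape as x: torch.Size([2, 64, 32, 32])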

3 Channel Squeeze and Spatial Excitation Block (sSE)

PyTorch code:

class sSE_Module(nn.Module):
    def __init__(self, channel):
        super(sSE_Module, self).__init__()
        # Squeeze along the channel axis with a 1x1 convolution, then gate each spatial location with a sigmoid
        self.spatial_excitation = nn.Sequential(
                nn.Conv2d(in_channels=channel, out_channels=1, kernel_size=1, stride=1, padding=0),
                nn.Sigmoid()
            )

    def forward(self, x):
        # z has shape (b, 1, H, W): one attention weight per spatial location
        z = self.spatial_excitation(x)
        return x * z.expand_as(x)
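
For illustration only (not part of the original code), the spatial attention map can be inspected directly; it has a single channel with one weight per pixel, which is then broadcast over all input channels:

# Illustrative only: inspect the spatial attention map produced by sSE
x = torch.randn(2, 64, 32, 32)
sse = sSE_Module(channel=64)
attn = sse.spatial_excitation(x)       # shape (2, 1, 32, 32), values in (0, 1)
out = sse(x)                           # same shape as x, gated per spatial location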

4 Spatial and Channel Squeeze & Excitation Block (scSE)

PyTorch code:

class scSE_Module(nn.Module):
    def __init__(self, channel, ratio=16):
        super(scSE_Module, self).__init__()
        self.cSE = cSE_Module(channel, ratio)
        self.sSE = sSE_Module(channel)

    def forward(self, x):
        # Combine the channel-wise and spatial recalibrations by element-wise addition
        return self.cSE(x) + self.sSE(x)
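
In the paper, scSE blocks are inserted after the encoder and decoder blocks of fully convolutional segmentation networks. Below is a minimal sketch of such a block; the layer choices and channel sizes are my own illustration, not the paper's exact architecture:

# Illustrative encoder block with an scSE recalibration appended; the channel
# counts and layer choices are assumptions for this sketch, not the paper's
# exact architecture.
class ConvBlockWithSCSE(nn.Module):
    def __init__(self, in_channels, out_channels, ratio=16):
        super(ConvBlockWithSCSE, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.scse = scSE_Module(out_channels, ratio)

    def forward(self, x):
        # Recalibrate the block's feature maps before passing them on
        return self.scse(self.conv(x))

block = ConvBlockWithSCSE(in_channels=3, out_channels=64)
y = block(torch.randn(1, 3, 128, 128))
print(y.shape)   # torch.Size([1, 64, 128, 128])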

5 Experimental Results

