Attention Mechanism in Convolutional Neural Networks (CBAM)

Paper: CBAM: Convolutional Block Attention Module
https://arxiv.org/pdf/1807.06521.pdf

The attention mechanism, put simply, learns or extracts a weight distribution from the features and then applies that distribution back onto the original features. This reshapes the feature response so that useful features are enhanced while useless features or noise are suppressed.
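
A minimal sketch of this idea (a toy gate over a flat feature vector, not the CBAM module itself; the single dense layer is just one arbitrary way of producing the weights):

import tensorflow as tf

def toy_attention(features):
    """Toy re-weighting: learn weights from the features, then apply them back."""
    num_features = int(features.shape[-1])
    # 1. Learn a weight distribution (values in (0, 1)) from the features themselves.
    weights = tf.layers.dense(features, units=num_features,
                              activation=tf.nn.sigmoid, name="toy_attention_weights")
    # 2. Multiply the weights back onto the original features: weights near 1 keep
    #    a feature, weights near 0 suppress it.
    return features * weights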

Attention can be applied to the original image or to intermediate feature maps.
The weighting can be done at the spatial scale or at the channel scale.
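
The difference between the two is simply which axes the weights live on and which axes they broadcast over. A shape-only sketch, assuming an NHWC feature map (the dummy tensors and sizes below are made up purely to show the broadcasting):

import tensorflow as tf

feature_map = tf.placeholder(tf.float32, [None, 8, 8, 32])    # [B, H, W, C]

# Channel attention: one weight per channel, broadcast over every spatial position.
channel_weights = tf.ones([1, 1, 1, 32])                      # [1, 1, 1, C]
channel_refined = feature_map * channel_weights               # still [B, 8, 8, 32]

# Spatial attention: one weight per spatial location, broadcast over every channel.
spatial_weights = tf.ones([1, 8, 8, 1])                       # [1, H, W, 1]
spatial_refined = feature_map * spatial_weights               # still [B, 8, 8, 32]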

The paper fuses spatial-scale and channel-scale attention into a single module, called CBAM.
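
For reference, the paper computes the two attention maps as follows (σ is the sigmoid function, MLP the shared two-layer perceptron, and f^{k×k} a convolution with kernel size k):

M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big)
M_s(F) = \sigma\big(f^{k \times k}([\mathrm{AvgPool}(F);\ \mathrm{MaxPool}(F)])\big)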

k is the kernel size of the convolution used to fuse the avg-pooled and max-pooled maps in the spatial attention branch.
k = 7 gives the best results.

This set of experiments shows that applying channel attention first and then spatial attention gives better results.
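
That is, the arrangement used in the paper is the sequential, channel-first one:

F'  = M_c(F) \otimes F
F'' = M_s(F') \otimes F'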

In my own experiments, adding the CBAM module slowed the network down quite a bit.
Also, in shallow networks, adding CBAM made essentially no difference compared with using SE (Squeeze-and-Excitation).
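
For comparison, an SE block only has the channel branch: a global average pool followed by a two-layer bottleneck MLP and a sigmoid gate. A minimal TF1-style sketch (assuming the same NHWC layout as the CBAM code below; the reduction ratio of 16 is just the commonly used default, not something taken from this post):

import tensorflow as tf

def squeeze_and_excitation_module(feature_map, reduction_ratio=16):
    """Minimal SE sketch: channel attention only (global average pool + 2 FC layers)."""
    with tf.variable_scope("se"):
        channels = feature_map.get_shape().as_list()[3]
        # Squeeze: global average pooling over the spatial dimensions -> [B, C]
        squeeze = tf.reduce_mean(feature_map, axis=[1, 2])
        # Excitation: bottleneck MLP followed by a sigmoid gate -> [B, C]
        excitation = tf.layers.dense(squeeze, units=channels // reduction_ratio,
                                     activation=tf.nn.relu, name="se_fc_1")
        excitation = tf.layers.dense(excitation, units=channels,
                                     activation=tf.nn.sigmoid, name="se_fc_2")
        # Scale: re-weight each channel of the original feature map
        excitation = tf.reshape(excitation, [-1, 1, 1, channels])
        return feature_map * excitation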

#!/usr/bin/python
# -*- coding: UTF-8 -*-
'''
channel attention + spatial attention
'''
from __future__ import absolute_import

import numpy as np
import tensorflow as tf

tf.reset_default_graph()
slim = tf.contrib.slim

def combined_static_and_dynamic_shape(tensor):
  """Returns a list containing static and dynamic values for the dimensions.

  Returns a list of static and dynamic values for shape dimensions. This is
  useful to preserve static shapes when available in reshape operation.

  Args:
    tensor: A tensor of any type.

  Returns:
    A list of size tensor.shape.ndims containing integers or a scalar tensor.
  """
  static_tensor_shape = tensor.shape.as_list()
  dynamic_tensor_shape = tf.shape(tensor)
  combined_shape = []
  for index, dim in enumerate(static_tensor_shape):
    if dim is not None:
      combined_shape.append(dim)
    else:
      combined_shape.append(dynamic_tensor_shape[index])
  return combined_shape

def convolutional_block_attention_module(feature_map, inner_units_ratio=0.5):
    """
    CBAM: convolution block attention module, which is described in "CBAM: Convolutional Block Attention Module"
    Architecture : "https://arxiv.org/pdf/1807.06521.pdf"
    If you want to use this module, just plug this module into your network
    :param feature_map: input feature map (NHWC)
    :param inner_units_ratio: hidden units of the shared MLP = inner_units_ratio * feature_map_channel
    :return:feature map with channel and spatial attention
    """
    with tf.variable_scope("cbam"):
        feature_map_shape = combined_static_and_dynamic_shape(feature_map)
        # channel attention
        channel_avg_weights = tf.nn.avg_pool(
            value=feature_map,
            ksize=[1, feature_map_shape[1], feature_map_shape[2], 1],
            strides=[1, 1, 1, 1],
            padding='VALID'
        )
        channel_max_weights = tf.nn.max_pool(
            value=feature_map,
            ksize=[1, feature_map_shape[1], feature_map_shape[2], 1],
            strides=[1, 1, 1, 1],
            padding='VALID'
        )
        channel_avg_reshape = tf.reshape(channel_avg_weights,
                                         [feature_map_shape[0], 1, feature_map_shape[3]])
        channel_max_reshape = tf.reshape(channel_max_weights,
                                         [feature_map_shape[0], 1, feature_map_shape[3]])
        channel_w_reshape = tf.concat([channel_avg_reshape, channel_max_reshape], axis=1)
        fc_1 = tf.layers.dense(
            inputs=channel_w_reshape,
            units=int(feature_map_shape[3] * inner_units_ratio),
            name="fc_1",
            activation=tf.nn.relu
        )
        fc_2 = tf.layers.dense(
            inputs=fc_1,
            units=feature_map_shape[3],
            name="fc_2",
            activation=None
        )
        channel_attention = tf.reduce_sum(fc_2, axis=1, name="channel_attention_sum")
        channel_attention = tf.nn.sigmoid(channel_attention, name="channel_attention_sum_sigmoid")
        channel_attention = tf.reshape(channel_attention, shape=[feature_map_shape[0], 1, 1, feature_map_shape[3]])
        
        feature_map_with_channel_attention = tf.multiply(feature_map, channel_attention)

        # spatial attention
        channel_wise_avg_pooling = tf.reduce_mean(feature_map_with_channel_attention, axis=3)
        channel_wise_max_pooling = tf.reduce_max(feature_map_with_channel_attention, axis=3)
        channel_wise_avg_pooling = tf.reshape(channel_wise_avg_pooling,
                                              shape=[feature_map_shape[0], feature_map_shape[1], feature_map_shape[2],
                                                     1])
        channel_wise_max_pooling = tf.reshape(channel_wise_max_pooling,
                                              shape=[feature_map_shape[0], feature_map_shape[1], feature_map_shape[2],
                                                     1])

        channel_wise_pooling = tf.concat([channel_wise_avg_pooling, channel_wise_max_pooling], axis=3)
        spatial_attention = slim.conv2d(
            channel_wise_pooling,
            1,
            [7, 7],
            padding='SAME',
            activation_fn=tf.nn.sigmoid,
            scope="spatial_attention_conv"
        )
        feature_map_with_attention = tf.multiply(feature_map_with_channel_attention, spatial_attention)
        return feature_map_with_attention





# Example usage
if __name__ == "__main__":
    feature_map = tf.constant(np.random.rand(50, 8, 8, 32), dtype=tf.float32)
    feature_map_with_attention = convolutional_block_attention_module(feature_map, inner_units_ratio=0.5)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        result = sess.run(feature_map_with_attention)
        print(result.shape)  # (50, 8, 8, 32)

Reference: https://www.jianshu.com/p/3e33ab049b4e
