Pixel shuffle in the super-resolution network ESPCN: several implementation approaches (TensorFlow, PyTorch)

Method 1:

Source code from GitHub: https://github.com/JuheonYi/VESPCN-tensorflow (the ESPCN part)

First, a quick look at how the ESPCN network is built: conv--conv--conv--ps

    def network(self, LR):
        feature_tmp = tf.layers.conv2d(LR, 64, 5, strides=1, padding='SAME', name='CONV_1',
                                kernel_initializer=tf.contrib.layers.xavier_initializer(), reuse=tf.AUTO_REUSE)
        feature_tmp = tf.nn.relu(feature_tmp)

        feature_tmp = tf.layers.conv2d(feature_tmp, 32, 3, strides=1, padding='SAME', name='CONV_2',
                                kernel_initializer=tf.contrib.layers.xavier_initializer(), reuse=tf.AUTO_REUSE)
        feature_tmp = tf.nn.relu(feature_tmp)

        feature_out = tf.layers.conv2d(feature_tmp, self.channels*self.scale*self.scale, 3, strides=1, padding='SAME', 
                            name='CONV_3', kernel_initializer = tf.contrib.layers.xavier_initializer())

        feature_out = PS(feature_out, self.scale, color=False)

        feature_out = tf.layers.conv2d(feature_out, 1, 1, strides=1, padding='SAME', 
                        name = 'CONV_OUT', kernel_initializer=tf.contrib.layers.xavier_initializer(), reuse=tf.AUTO_REUSE)
        return feature_out

Here the PS operation is the pixel shuffle.

The PS operation: it reshapes an H * W * (C * r * r) tensor into rH * rW * C, enlarging the spatial size from H * W to rH * rW.

def PS(X, r, color=False):
    #print("Input X shape:",X.get_shape(),"scale:",r)
    if color:
        Xc = tf.split(X, 3, 3)
        X = tf.concat([_phase_shift(x, r) for x in Xc], 3)   # each x in Xc has r*r channels; each channel group is shuffled separately
    else:
        X = _phase_shift_1dim(X, r)
    #print("output X shape:",X.get_shape())
    return X

For the tf.split method, see the TensorFlow API docs: https://www.tensorflow.org/api_docs/python/tf/split

In short, the result is an Xc (three channel groups, each H * W * r * r); each group is then traversed in turn, mixing (shuffling) the r factors into H and W.

The concrete operation:

def _phase_shift(I, r):
    #print("I:",I,"I.get_shape():",I.get_shape().as_list())
    bsize, a, b, c = I.get_shape().as_list()
    bsize = tf.shape(I)[0] # Handling Dimension(None) type for undefined batch dim
    X = tf.reshape(I, (bsize, a, b, r, r))
    #X = tf.transpose(X, (0, 1, 2, 4, 3))  # bsize, a, b, 1, 1
    X = tf.split(X, a, 1)  # a, [bsize, b, r, r]  split into a slices, each of size 1 along that axis
    X = tf.concat([tf.squeeze(x, axis=1) for x in X], 2)  # bsize, b, a*r, r
    X = tf.split(X, b, 1)  # b, [bsize, a*r, r]
    X = tf.concat([tf.squeeze(x, axis=1) for x in X], 2)  # bsize, a*r, b*r
   
    return tf.reshape(X, (bsize, a*r, b*r, 1))

The key steps are the split and concat calls: together they take the pixels apart and reassemble them, turning a into a*r, and likewise b into b*r.
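The same split/concat reshuffle can be expressed more compactly with a reshape and transpose. The sketch below is a NumPy equivalent (a hypothetical helper, not from the repository) that produces the same index mapping as `_phase_shift` above: output position (a*r + r1, b*r + r2) reads input channel r1*r + r2.

```python
import numpy as np

def phase_shift_np(I, r):
    """NumPy sketch of _phase_shift: (bsize, a, b, r*r) -> (bsize, a*r, b*r, 1)."""
    bsize, a, b, c = I.shape
    assert c == r * r
    X = I.reshape(bsize, a, b, r, r)       # channel index c = r1*r + r2
    # Interleave the two r-axes with the spatial axes, matching the
    # split/concat order above: rows pick up r1, columns pick up r2.
    X = X.transpose(0, 1, 3, 2, 4)         # bsize, a, r, b, r
    return X.reshape(bsize, a * r, b * r, 1)
```

This relies on the fact that reshaping merged axes (a, r) into a*r keeps a as the outer index and r as the inner one, which is exactly what concatenating the a slices along the row axis achieves in the TF version.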

 

Method 2:

From: https://github.com/drakelevy/ESPCN-TensorFlow

The shuffle operation is as follows:

def shuffle(input_image, ratio):
    shape = input_image.shape
    height = int(shape[0]) * ratio
    width = int(shape[1]) * ratio
    channels = int(shape[2]) // ratio // ratio
    shuffled = np.zeros((height, width, channels), dtype=np.uint8)
    for i in range(0, height):
        for j in range(0, width):
            for k in range(0, channels):
                # each output pixel is picked from one of the r*r stacked channels
                shuffled[i,j,k] = input_image[i // ratio, j // ratio, k * ratio * ratio + (i % ratio) * ratio + (j % ratio)]
    return shuffled

Simple and brute-force: it scrambles and reassembles directly, building a new image from the original pixel by pixel (thinking in Python terms: a 3-D array, processed one dimension, i.e. one sub-array, at a time), with each pixel placed individually.
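The triple loop is clear but slow for large images. As a sanity check on the index formula, the same mapping can be written as a single reshape/transpose; `shuffle_vectorized` below is a hypothetical NumPy rewrite, not part of the original repository.

```python
import numpy as np

def shuffle_vectorized(input_image, ratio):
    """Vectorized sketch of the per-pixel shuffle loop above."""
    h, w, c = input_image.shape
    channels = c // (ratio * ratio)
    # channel index = k*ratio*ratio + r1*ratio + r2, matching the loop body
    x = input_image.reshape(h, w, channels, ratio, ratio)
    x = x.transpose(0, 3, 1, 4, 2)  # h, r1, w, r2, channels
    return x.reshape(h * ratio, w * ratio, channels)
```

Both versions place input channel k*r*r + (i % r)*r + (j % r) at output position (i, j, k), so they should produce identical images.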

 

In PyTorch, an official pixel shuffle method is provided:

class torch.nn.PixelShuffle(upscale_factor)
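`torch.nn.PixelShuffle` works on NCHW tensors, turning (N, C*r*r, H, W) into (N, C, H*r, W*r). The sketch below mirrors that semantics in NumPy (a hypothetical helper for illustration; with PyTorch installed, `torch.nn.PixelShuffle(r)` applied to the same data should agree).

```python
import numpy as np

def pixel_shuffle_np(x, r):
    """NumPy sketch of torch.nn.PixelShuffle semantics (NCHW layout)."""
    n, c, h, w = x.shape
    out_c = c // (r * r)
    # input channel index = k*r*r + r1*r + r2
    x = x.reshape(n, out_c, r, r, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)   # n, out_c, h, r1, w, r2
    return x.reshape(n, out_c, h * r, w * r)
```

Note the layout difference from the TF examples above: PyTorch shuffles channels-first tensors, so the transpose order differs even though the pixel mapping is the same idea.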

 
