keras split tensor / slice tensor

How to split a tensor in keras

Introduction

Running the following code:

# coding=utf-8
from keras.layers import *

if __name__ == '__main__':
    import numpy as np
    inp1 = Input(batch_shape=(None, 12, 30, 40))
    inp2 = Input(batch_shape=(None, 12, 30, 40))

    # slice & concat
    concat = merge([inp1[:, 0:6, ...], inp2], mode='concat', concat_axis=-1)
    conv = Convolution2D(32, 3, 3, name='conv1')(concat)

produces the following error:

Exception: You tried to call layer "conv1". This layer has no information about its expected input shape, and thus cannot be built. You can build it manually via: `layer.build(batch_input_shape)`

My requirement is actually simple: take half of inp1's channels, concatenate them with inp2, and then run a convolution. Such a basic operation, and Keras still throws an error. The message is quite baffling; I spent a whole afternoon searching and experimenting without getting anywhere.

In the evening I had a flash of insight: perhaps I need to define a custom Keras Layer that, given an input_shape, reports an output_shape? Then the layers that follow would be able to infer their shapes.
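This is indeed the root cause: tensors produced by Keras layers carry shape metadata (in Keras 1.x, a _keras_shape attribute along with the layer history) that downstream layers read when they build, whereas plain indexing such as inp1[:, 0:6, ...] returns a bare backend tensor without any of it. A quick check makes this visible (a minimal sketch, assuming Keras 1.1.0 with the Theano backend, as in the rest of this post):

# coding=utf-8
from keras.layers import Input

inp1 = Input(batch_shape=(None, 12, 30, 40))
sliced = inp1[:, 0:6, ...]  # raw backend slicing, not a Keras layer

print hasattr(inp1, '_keras_shape')    # True: created by the Input layer
print hasattr(sliced, '_keras_shape')  # False: the shape metadata is gone, so conv1 cannot build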

Defining a custom Split layer

Different Keras versions require slightly different interfaces for a custom layer. I am on Keras 1.1.0, so I follow http://faroit.com/keras-docs/1.1.0/layers/writing-your-own-keras-layers/. In Keras 1.1.0, a custom layer looks like this:

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        input_dim = input_shape[1]
        initial_weight_value = np.random.random((input_dim, self.output_dim))
        self.W = K.variable(initial_weight_value)
        self.trainable_weights = [self.W]

    def call(self, x, mask=None):
        return K.dot(x, self.W)

    def get_output_shape_for(self, input_shape):
        return (input_shape[0], self.output_dim)

A custom layer needs to implement the __init__, build, call, and get_output_shape_for interfaces. Splitting a tensor means the layer outputs multiple tensors, so compute_mask must also be implemented to return one mask per output, and get_output_shape_for has to return a list of shapes.

Putting this together, a Split layer can be implemented as follows:

# coding=utf-8
from keras.layers import *


class Split(Layer):
    def __init__(self, **kwargs):
        super(Split, self).__init__(**kwargs)

    def build(self, input_shape):
        # Call the parent class's build to mark this layer as built
        super(Split, self).build(input_shape)
        # Save the input shape; the other methods need it
        self.shape = input_shape

    def call(self, x, mask=None):
        # Split x into two tensors along axis 1
        seq = [x[:, 0:self.shape[1] // 2, ...],
               x[:, self.shape[1] // 2:, ...]]
        return seq

    def compute_mask(self, inputs, input_mask=None):
        # This layer outputs two tensors, so return one mask per output; None is allowed
        return [None, None]

    def get_output_shape_for(self, input_shape):
        # This layer returns two tensors, so return both output shapes
        shape0 = list(self.shape)
        shape1 = list(self.shape)
        shape0[1] = self.shape[1] // 2
        shape1[1] = self.shape[1] - self.shape[1] // 2
        # print [shape0, shape1]
        return [shape0, shape1]

Using the Split layer defined above:

if __name__ == '__main__':
    import numpy as np
    inp1 = Input(batch_shape=(None, 17, 30, 40))
    inp2 = Input(batch_shape=(None, 12, 30, 40))
    sp = Split()(inp1)
    # slice & concat
    concat = merge([sp[0], inp2], mode='concat', concat_axis=1)
    conv = Convolution2D(32, 3, 3, name='conv1')(concat)

    data = {inp1: np.random.rand(12, 17, 30, 40).astype(np.float32),
            inp2: np.random.rand(12, 12, 30, 40).astype(np.float32)}
    print sp[0].eval({inp1: np.random.rand(12, 17, 30, 40).astype(np.float32)}).shape
    print sp[1].eval({inp1: np.random.rand(12, 17, 30, 40).astype(np.float32)}).shape
    print concat.eval(data).shape
    print conv.eval(data).shape

Running the program above prints:

(12L, 8L, 30L, 40L)
(12L, 9L, 30L, 40L)
(12L, 20L, 30L, 40L)
(12L, 32L, 28L, 38L)

It meets the requirement and no longer raises an error.
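As a further check, the whole graph can be wrapped in a Model and run end to end (a sketch that continues the script above; it assumes the same Keras 1.x functional API and the channels-first layout used throughout):

from keras.models import Model

# Wrap the graph built above (inp1, inp2 -> conv) into a model
model = Model(input=[inp1, inp2], output=conv)
model.summary()

out = model.predict([np.random.rand(12, 17, 30, 40).astype(np.float32),
                     np.random.rand(12, 12, 30, 40).astype(np.float32)])
print out.shape  # expected: (12, 32, 28, 38)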
