keras split tensor / slice tensor

How to split a tensor in keras

Introduction

Run the following code:

# coding=utf-8
from keras.layers import *

if __name__ == '__main__':
    import numpy as np
    inp1 = Input(batch_shape=(None, 12, 30, 40))
    inp2 = Input(batch_shape=(None, 12, 30, 40))

    # Split & Concat
    concat = merge([inp1[:, 0:6, ...], inp2], mode='concat', concat_axis=-1)
    conv = Convolution2D(32, 3, 3, name='conv1')(concat)

It throws the following error:

Exception: You tried to call layer "conv1". This layer has no information about its expected input shape, and thus cannot be built. You can build it manually via: `layer.build(batch_input_shape)`

Actually, my requirement is simple: take half of inp1's channels, concatenate them with inp2, and then run a convolution. Such a simple operation, yet Keras throws an error. The error message is really baffling; I searched and experimented for a whole afternoon and got nowhere...

That evening it hit me: maybe I need to define a custom Keras Layer that, given its input_shape, reports an output_shape? Then the layers after it would be able to infer their shapes.
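
As a minimal sketch of that idea (the layer name and the hard-coded slice width are mine, purely for illustration): a layer that slices its input and declares the resulting shape is already enough for the layers after it to build. The full, two-output Split layer is developed in the next section.

# hypothetical single-output version, for illustration only
class FirstHalf(Layer):
    def call(self, x, mask=None):
        # keep only the first 6 channels
        return x[:, 0:6, ...]

    def get_output_shape_for(self, input_shape):
        # tell Keras the output shape: (batch, 6, height, width)
        return (input_shape[0], 6) + input_shape[2:]

# usage instead of the bare slice that triggered the error:
# concat = merge([FirstHalf()(inp1), inp2], mode='concat', concat_axis=1)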

Defining a custom Split layer

Different Keras versions require slightly different interfaces for custom layers. I'm using Keras 1.1.0, so I followed http://faroit.com/keras-docs/1.1.0/layers/writing-your-own-keras-layers/. In Keras 1.1.0, a custom layer looks like this:

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # create this layer's trainable weight matrix
        input_dim = input_shape[1]
        initial_weight_value = np.random.random((input_dim, self.output_dim))
        self.W = K.variable(initial_weight_value)
        self.trainable_weights = [self.W]

    def call(self, x, mask=None):
        return K.dot(x, self.W)

    def get_output_shape_for(self, input_shape):
        return (input_shape[0], self.output_dim)
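
For reference, such a layer plugs into the functional API like any built-in layer (a hypothetical usage; the shapes are arbitrary):

from keras.layers import Input

inp = Input(batch_shape=(None, 20))
out = MyLayer(64)(inp)  # output shape (None, 64), inferred via get_output_shape_for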

A custom layer needs to implement __init__, build, call, and get_output_shape_for. Since splitting a tensor means the layer produces multiple output tensors, get_output_shape_for has to return a list of shapes, and compute_mask also has to be implemented, returning a list of masks (one per output, each of which may be None).

Putting this together, a Split layer can be implemented as follows:

# coding=utf-8
from keras.layers import *


class Split(Layer):
    def __init__(self, **kwargs):
        super(Split, self).__init__(**kwargs)

    def build(self, input_shape):
        # call the parent class's build to mark this layer as built
        super(Split, self).build(input_shape)
        # remember the input shape; the other methods need it
        self.shape = input_shape

    def call(self, x, mask=None):
        # split x into two tensors along axis 1 (the channel axis)
        seq = [x[:, 0:self.shape[1] // 2, ...],
               x[:, self.shape[1] // 2:, ...]]
        return seq

    def compute_mask(self, inputs, input_mask=None):
        # this layer has two output tensors, so return one mask per output; masks may be None
        return [None, None]

    def get_output_shape_for(self, input_shape):
        # two output tensors, so return both of their shapes
        shape0 = list(self.shape)
        shape1 = list(self.shape)
        shape0[1] = self.shape[1] // 2
        shape1[1] = self.shape[1] - self.shape[1] // 2
        # print [shape0, shape1]
        return [shape0, shape1]

Using the Split layer defined above:

if __name__ == '__main__':
    import numpy as np
    inp1 = Input(batch_shape=(None, 17, 30, 40))
    inp2 = Input(batch_shape=(None, 12, 30, 40))
    sp = Split()(inp1)
    # Split & Concat
    concat = merge([sp[0], inp2], mode='concat', concat_axis=1)
    conv = Convolution2D(32, 3, 3, name='conv1')(concat)

    data = {inp1: np.random.rand(12, 17, 30, 40).astype(np.float32),
            inp2: np.random.rand(12, 12, 30, 40).astype(np.float32)}
    print sp[0].eval({inp1: np.random.rand(12, 17, 30, 40).astype(np.float32)}).shape
    print sp[1].eval({inp1: np.random.rand(12, 17, 30, 40).astype(np.float32)}).shape
    print concat.eval(data).shape
    print conv.eval(data).shape

Running the program above prints:

(12L, 8L, 30L, 40L)
(12L, 9L, 30L, 40L)
(12L, 20L, 30L, 40L)
(12L, 32L, 28L, 38L)

It does exactly what I need, without any error: the 17 channels of inp1 are split into 8 and 9, the first half (8 channels) concatenated with inp2 (12 channels) gives 20 channels, and the 3x3 convolution with 32 filters (valid padding) then produces (32, 28, 38).
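
For what it's worth, a similar effect can probably be achieved without a custom class, since Lambda in Keras 1.x also accepts an output_shape argument. A sketch under that assumption (one Lambda per half, with the split point hard-coded for a 17-channel input):

def first_half(x):
    # first 8 of the 17 channels
    return x[:, 0:8, ...]

def second_half(x):
    # remaining 9 channels
    return x[:, 8:, ...]

# output_shape excludes the batch dimension
sp0 = Lambda(first_half, output_shape=(8, 30, 40))(inp1)
sp1 = Lambda(second_half, output_shape=(9, 30, 40))(inp1)

The custom Split layer above keeps both halves in a single layer and derives the split point from the input shape instead of hard-coding it, which is why I went with the custom layer.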
