Error message: Output tensors to a Model must be the output of a TensorFlow `Layer`
While writing a TextCNN model, the code raised this error. The offending code:
import tensorflow as tf
import keras

convs = []
inputs = keras.layers.Input(shape=(256,))
embed1 = keras.layers.Embedding(10000, 32)(inputs)
# embed = keras.layers.Reshape(-1,256, 32, 1)(embed1)
print(embed1[0])
embed = tf.reshape(embed1, [-1, 256, 32, 1])
print(embed[0])
l_conv1 = keras.layers.Conv2D(filters=3, kernel_size=(2, 32), activation='relu')(embed) # output length = 1 + (input length - kernel size + 2*padding) / stride; the kernel shape is (fsz, embedding_size)
l_pool1 = keras.layers.MaxPooling2D(pool_size=(255, 1))(l_conv1) # the key difference here: the pooling window is as long as the convolved feature map
l_pool11 = keras.layers.Flatten()(l_pool1) # usually the layer right before the fully connected part, flattens the data to 1-D
convs.append(l_pool11)
l_conv2 = keras.layers.Conv2D(filters=3, kernel_size=(3, 32), activation='relu')(embed)
l_pool2 = keras.layers.MaxPooling2D(pool_size=(254, 1))(l_conv2)
l_pool22 = keras.layers.Flatten()(l_pool2)
convs.append(l_pool22)
l_conv3 = keras.layers.Conv2D(filters=3, kernel_size=(4, 32), activation='relu')(embed)
l_pool3 = keras.layers.MaxPooling2D(pool_size=(253, 1))(l_conv3)
l_pool33 = keras.layers.Flatten()(l_pool3)
convs.append(l_pool33)
merge = keras.layers.concatenate(convs, axis=1)
out = keras.layers.Dropout(0.5)(merge)
output = keras.layers.Dense(32, activation='relu')(out)
pred = keras.layers.Dense(units=1, activation='sigmoid')(output)
model = keras.models.Model(inputs=inputs, outputs=pred)
# adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.summary()
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
After searching online, I found the cause: when building a model with the Keras functional API, inserting a raw tf.reshape() call in the middle triggers this error, because the resulting tensor is not the output of a Keras `Layer`.
Solution: anyone hitting the same problem can wrap the TensorFlow op in a Keras Lambda layer. This is what I did:
embed1 = keras.layers.Embedding(10000, 32)(inputs)
# embed = keras.layers.Reshape(-1,256, 32, 1)(embed1)
# embed = tf.reshape(embed1, [-1, 256, 32, 1])
def reshapes(embed1):
    embed = tf.reshape(embed1, [-1, 256, 32, 1])
    return embed
embed = keras.layers.Lambda(reshapes)(embed1)
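Putting the fix together, here is a minimal end-to-end sketch of the repaired TextCNN. It is an assumption-laden reconstruction, not the author's exact script: it uses tf.keras (where Lambda-wrapped tf ops are tracked by the graph), keeps the post's sizes (vocab 10000, sequence length 256, embedding dim 32, filter sizes 2/3/4), and folds the three conv branches into a loop:

```python
import tensorflow as tf
from tensorflow import keras

inputs = keras.layers.Input(shape=(256,))
embed1 = keras.layers.Embedding(10000, 32)(inputs)
# Wrap the raw TensorFlow op in a Lambda layer so Keras treats it as a Layer output.
embed = keras.layers.Lambda(lambda t: tf.reshape(t, [-1, 256, 32, 1]))(embed1)

convs = []
for fsz in (2, 3, 4):  # the three filter sizes from the post
    conv = keras.layers.Conv2D(filters=3, kernel_size=(fsz, 32), activation='relu')(embed)
    # pooling window spans the whole convolved feature map: 256 - fsz + 1
    pool = keras.layers.MaxPooling2D(pool_size=(256 - fsz + 1, 1))(conv)
    convs.append(keras.layers.Flatten()(pool))

merge = keras.layers.concatenate(convs, axis=1)
out = keras.layers.Dropout(0.5)(merge)
dense = keras.layers.Dense(32, activation='relu')(out)
pred = keras.layers.Dense(units=1, activation='sigmoid')(dense)

model = keras.models.Model(inputs=inputs, outputs=pred)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
```

Note that since the reshape here only adds a trailing channel dimension, the built-in `keras.layers.Reshape((256, 32, 1))` layer (target shape without the batch dimension) should work as well, avoiding the Lambda entirely.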