Keras Learning --- Building an RNN Model

This example is the "IMDB sentiment classification task", implemented with a single-layer LSTM.

1. Input data preprocessing
The input text is uniformly padded or truncated to maxlen=80 words. Why?
Presumably because very long sequences make training prone to diverging, so this effectively caps how far back the memory has to reach.
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
Is there a dynamic alternative? After all, the input sentences themselves vary in length.
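On the two questions above: pad_sequences zero-pads short reviews and truncates long ones (both on the left by default), and for truly variable-length input one common Keras option is to pass mask_zero=True to the Embedding layer so that the LSTM skips the padded time steps. A minimal sketch of the padding behaviour, with made-up index lists:

# Minimal sketch of pad_sequences: short sequences are zero-padded,
# long ones truncated, both on the left by default.
from keras.preprocessing import sequence

seqs = [[11, 23, 5], [7, 9, 13, 2, 41, 8]]
print(sequence.pad_sequences(seqs, maxlen=5))
# [[ 0  0 11 23  5]
#  [ 9 13  2 41  8]]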

2. Embedding layer
In the code, max_features=20000 corresponds to the vocabulary size.
Regarding the embedding vectors: how should Chinese text be handled?
Do the embedding vectors also get trained during RNN training? How should this be chosen?
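On the questions above: by default the Embedding weights in Keras are trainable and are updated by backpropagation together with the rest of the network. For Chinese text, the usual first step is word (or character) segmentation to build the vocabulary; the vectors can then be learned from scratch or initialized from pretrained vectors such as word2vec. A hedged sketch, where the pretrained matrix is just a random stand-in:

import numpy as np
from keras.layers import Embedding

max_features = 20000
# Stand-in for a real pretrained matrix, e.g. word2vec vectors built from
# word-segmented Chinese text (hypothetical); shape must be (vocab_size, dim).
pretrained_matrix = np.random.randn(max_features, 128).astype('float32')

emb = Embedding(max_features, 128,
                weights=[pretrained_matrix],  # initialize from the pretrained vectors
                trainable=False)              # freeze them; the default (True) fine-tunes them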

3. Understanding the RNN model architecture
The embedding vectors are 128-dimensional, and the hidden layer has 128 units (other values are also possible).
The embedding vector is fully connected to the hidden-layer units, and each hidden unit carries its own memory cell.
On top of this 128-unit hidden layer sits a single Dense unit, which is fully connected to the 128-unit hidden layer.

The forward pass feeds the 80 words in one at a time; once the last word has been consumed, the final output of the Dense unit is the estimated value of Y.
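A small sketch of this unrolling, using a probe model with return_sequences=True to expose the hidden state at each of the 80 time steps (the random indices stand in for one padded review; in the real model the Dense unit only sees the final step):

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM

probe = Sequential()
probe.add(Embedding(20000, 128))
probe.add(LSTM(128, return_sequences=True))  # expose the hidden state at every time step

x = np.random.randint(1, 20000, size=(1, 80))  # stand-in for one padded 80-word review
h = probe.predict(x)
print(h.shape)  # (1, 80, 128); the classifier's Dense unit only uses the last step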

The complete code is as follows:
'''Trains a LSTM on the IMDB sentiment classification task.

The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.

Notes:

- RNNs are tricky. Choice of batch size is important,
  choice of loss and optimizer is critical, etc.
  Some configurations won't converge.

- LSTM loss decrease patterns during training can be quite different
  from what you see with CNNs/MLPs/etc.
'''
from __future__ import print_function

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb

max_features = 20000
maxlen = 80  # cut texts after this number of words (among top max_features most common words)
batch_size = 32

print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(nb_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
# model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(LSTM(128))
model.add(Dense(1, activation='sigmoid'))

# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

print('Train...')
model.fit(x_train, y_train,
          batch_size=batch_size,
          nb_epoch=15,
          validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
                            batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)


The model statistics obtained via summary() are as follows:
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
embedding_3 (Embedding)          (None, None, 128)     2560000     embedding_input_1[0][0]
____________________________________________________________________________________________________
lstm_1 (LSTM)                    (None, 128)           131584      embedding_3[0][0]
____________________________________________________________________________________________________
dense_9 (Dense)                  (None, 1)             129         lstm_1[0][0]
====================================================================================================
Total params: 2,691,713
Trainable params: 2,691,713
Non-trainable params: 0
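
These parameter counts can be checked by hand (a quick verification, not part of the original example):

# Quick hand check of the parameter counts reported by summary():
embedding_params = 20000 * 128                   # one 128-d vector per vocabulary word
lstm_params = 4 * (128 * 128 + 128 * 128 + 128)  # 4 gates x (input weights + recurrent weights + bias)
dense_params = 128 * 1 + 1                       # 128 weights plus one bias
print(embedding_params, lstm_params, dense_params,
      embedding_params + lstm_params + dense_params)
# 2560000 131584 129 2691713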


