Function prototype
tf.nn.dynamic_rnn(
    cell,
    inputs,
    sequence_length=None,
    initial_state=None,
    dtype=None,
    parallel_iterations=None,
    swap_memory=False,
    time_major=False,
    scope=None
)
Parameter explanation:
- cell: an instance of RNNCell.
- inputs: the RNN input.
  - If time_major == False (the default), a Tensor of shape [batch_size, max_time, input_size], or a nested tuple of such elements.
  - If time_major == True, a Tensor of shape [max_time, batch_size, input_size], or a nested tuple of such elements.
- sequence_length: (optional) an int32/int64 vector of size [batch_size]. For any time step whose index exceeds a sequence's actual length, no computation is performed: the RNN state is copied through from the previous step, and the output at that step is all zeros.
- initial_state: (optional) the initial state of the RNN. If cell.state_size is an integer (a single-layer RNNCell), it must be a tensor of the appropriate type with shape [batch_size, cell.state_size]. If cell.state_size is a tuple (a multi-layer RNNCell such as MultiRNNCell), it should be a tuple of tensors with shape [batch_size, s] for s in cell.state_size. A zero initial state can be built with cell.zero_state, as in the sketch after this list.
- time_major: the shape format of the inputs and outputs tensors. If True, these tensors must be (and will be) of shape [max_time, batch_size, depth]; if False, of shape [batch_size, max_time, depth]. In other words, time_major=True means the first dimension of the input and output tensors is max_time; otherwise it is batch_size. Using time_major=True is more efficient because it avoids transposes at the start and end of the RNN computation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. The sketch after this list shows both forms.
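A minimal sketch of these two options under TensorFlow 1.x; the sizes batch_size, max_time, input_size, and num_units are illustrative, not taken from the worked example below:

import tensorflow as tf

batch_size, max_time, input_size, num_units = 4, 2, 3, 5  # illustrative sizes

X = tf.placeholder(tf.float32, [batch_size, max_time, input_size])

# Batch-major (the default): build an explicit all-zero initial state with
# cell.zero_state; this is also what dynamic_rnn constructs internally when
# initial_state is omitted and dtype is given.
cell = tf.contrib.rnn.BasicRNNCell(num_units=num_units)
init_state = cell.zero_state(batch_size, tf.float32)
outputs, state = tf.nn.dynamic_rnn(cell, X, initial_state=init_state)
# outputs: [batch_size, max_time, num_units]; state: [batch_size, num_units]

# Time-major: transpose the input to [max_time, batch_size, input_size] and
# pass time_major=True; outputs then come back time-major as well.
with tf.variable_scope("time_major_rnn"):
    cell_tm = tf.contrib.rnn.BasicRNNCell(num_units=num_units)
    X_tm = tf.transpose(X, perm=[1, 0, 2])
    outputs_tm, state_tm = tf.nn.dynamic_rnn(
        cell_tm, X_tm, dtype=tf.float32, time_major=True)
    # outputs_tm: [max_time, batch_size, num_units]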
Return value:
A pair (outputs, state), where:
- outputs: the RNN output Tensor.
  - If time_major == False (the default), this is a Tensor of shape [batch_size, max_time, cell.output_size].
  - If time_major == True, this is a Tensor of shape [max_time, batch_size, cell.output_size].
- state: the final state.
  - In the ordinary single-layer case, state has shape [batch_size, cell.state_size] (for BasicRNNCell, state_size equals output_size).
  - If cell is an LSTM cell, state is a tuple containing an LSTMStateTuple for each cell, i.e. two tensors c and h, each of shape [batch_size, cell.state_size] (conceptually [2, batch_size, cell.state_size]).
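When sequence_length is passed, the last valid output of every sequence can be gathered from outputs; for a single-layer RNN this is exactly what state holds. A minimal sketch, assuming batch-major outputs of shape [batch_size, max_time, num_units] and the same seq_length vector that was fed to dynamic_rnn (the helper name is ours, not part of the API):

import tensorflow as tf

def last_relevant_output(outputs, seq_length):
    """Gather outputs[i, seq_length[i] - 1] for every batch row i."""
    batch_size = tf.shape(outputs)[0]
    # Pair each batch index i with its last valid time index seq_length[i] - 1.
    indices = tf.stack([tf.range(batch_size), seq_length - 1], axis=1)
    return tf.gather_nd(outputs, indices)  # [batch_size, num_units]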
Worked example
import tensorflow as tf
import numpy as np

n_steps = 2    # max_time
n_inputs = 3   # input size per step
n_neurons = 5  # i.e. hidden_size

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
# A single-layer basic RNN cell.
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
# Actual length of each instance in the batch.
seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
                                    sequence_length=seq_length)
init = tf.global_variables_initializer()

X_batch = np.array([
    # step 0     step 1
    [[0, 1, 2], [9, 8, 7]],  # instance 1
    [[3, 4, 5], [0, 0, 0]],  # instance 2 (padded with a zero vector)
    [[6, 7, 8], [6, 5, 4]],  # instance 3
    [[9, 0, 1], [3, 2, 1]],  # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])

with tf.Session() as sess:
    init.run()
    outputs_val, states_val = sess.run(
        [outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
    print("outputs_val.shape:", outputs_val.shape, "states_val.shape:", states_val.shape)
    print("outputs_val:", outputs_val, "states_val:", states_val)
Output
outputs_val.shape: (4, 2, 5) states_val.shape: (4, 5)
outputs_val:
[[[ 0.53073734 -0.61281306 -0.5437517 0.7320347 -0.6109526 ]
[ 0.99996936 0.99990636 -0.9867181 0.99726075 -0.99999976]]
[[ 0.9931584 0.5877845 -0.9100412 0.988892 -0.9982337 ]
[ 0. 0. 0. 0. 0. ]]
[[ 0.99992317 0.96815354 -0.985101 0.9995968 -0.9999936 ]
[ 0.99948144 0.9998127 -0.57493806 0.91015154 -0.99998355]]
[[ 0.99999255 0.9998929 0.26732785 0.36024097 -0.99991137]
[ 0.98875254 0.9922327 0.6505734 0.4732064 -0.9957567 ]]]
states_val:
[[ 0.99996936 0.99990636 -0.9867181 0.99726075 -0.99999976]
[ 0.9931584 0.5877845 -0.9100412 0.988892 -0.9982337 ]
[ 0.99948144 0.9998127 -0.57493806 0.91015154 -0.99998355]
[ 0.98875254 0.9922327 0.6505734 0.4732064 -0.9957567 ]]
The RNN network built by the code above is shown in the figure below (figure not included): ellipses represent tensors, rectangles represent RNN cells.
First, tf.nn.dynamic_rnn()'s time_major defaults to False, so the input X must be a tensor of shape [batch_size, n_steps, n_inputs]. Note that we call BasicRNNCell here, so there is only one recurrent layer. outputs is the output of the last layer at every step, with shape [batch_size, n_steps, n_neurons] = [4, 2, 5]. states is the last-step output of each layer; since this network has only one hidden layer, it is simply that layer's last-step output, and its shape does not depend on the number of steps. X consists of 4 instances and the hidden layer has n_neurons = 5 neurons, so states has shape [batch_size, n_neurons] = [4, 5]. Finally, inspecting the data, each row of states is exactly the output of that instance's last valid step in outputs (the sketch below checks this numerically).
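A quick numpy check of that claim, using the outputs_val, states_val, and seq_length_batch produced by the session run above:

import numpy as np

# Row i of states_val should equal the output of instance i at its last
# valid step, i.e. outputs_val[i, seq_length_batch[i] - 1].
for i, length in enumerate(seq_length_batch):
    assert np.allclose(states_val[i], outputs_val[i, length - 1])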
Next we look at the case of multiple hidden layers (three in this example). Note that we are still using BasicRNNCell.
import tensorflow as tf
import numpy as np

n_steps = 2
n_inputs = 3
n_neurons = 5
n_layers = 3

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
seq_length = tf.placeholder(tf.int32, [None])

# Stack three basic RNN cells (with relu activation) into one multi-layer cell.
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons,
                                      activation=tf.nn.relu)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32,
                                    sequence_length=seq_length)
init = tf.global_variables_initializer()

X_batch = np.array([
    # step 0     step 1
    [[0, 1, 2], [9, 8, 7]],  # instance 1
    [[3, 4, 5], [0, 0, 0]],  # instance 2 (padded with a zero vector)
    [[6, 7, 8], [6, 5, 4]],  # instance 3
    [[9, 0, 1], [3, 2, 1]],  # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])

with tf.Session() as sess:
    init.run()
    outputs_val, states_val = sess.run(
        [outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
    # Print the graph tensors themselves to show their static structure.
    print("outputs:", outputs, "states:", states)
    print("outputs_val:", outputs_val, "states_val:", states_val)
Output
outputs:
Tensor("rnn/transpose_1:0", shape=(?, 2, 5), dtype=float32)
states:
(<tf.Tensor 'rnn/while/Exit_3:0' shape=(?, 5) dtype=float32>,
<tf.Tensor 'rnn/while/Exit_4:0' shape=(?, 5) dtype=float32>,
<tf.Tensor 'rnn/while/Exit_5:0' shape=(?, 5) dtype=float32>)
outputs_val:
[[[0. 0. 0. 0. 0. ]
[0. 0.18740742 0. 0.2997518 0. ]]
[[0. 0.07222144 0. 0.11551574 0. ]
[0. 0. 0. 0. 0. ]]
[[0. 0.13463384 0. 0.21534224 0. ]
[0.03702604 0.18443246 0. 0.34539366 0. ]]
[[0. 0.54511094 0. 0.8718864 0. ]
[0.5382122 0. 0.04396425 0.4040263 0. ]]]
states_val:
(array([[0. , 0.83723307, 0. , 0. , 2.8518028 ],
[0. , 0.1996038 , 0. , 0. , 1.5456247 ],
[0. , 1.1372368 , 0. , 0. , 0.832613 ],
[0. , 0.7904129 , 2.4675028 , 0. , 0.36980057]],
dtype=float32),
array([[0.6524607 , 0. , 0. , 0. , 0. ],
[0.25143963, 0. , 0. , 0. , 0. ],
[0.5010576 , 0. , 0. , 0. , 0. ],
[0. , 0.3166597 , 0.4545995 , 0. , 0. ]],
dtype=float32),
array([[0. , 0.18740742, 0. , 0.2997518 , 0. ],
[0. , 0.07222144, 0. , 0.11551574, 0. ],
[0.03702604, 0.18443246, 0. , 0.34539366, 0. ],
[0.5382122 , 0. , 0.04396425, 0.4040263 , 0. ]],
dtype=float32))
The multi-layer RNN network is shown in the figure below (figure not included).
As we said, outputs is the output of the last (top) layer at every step, i.e. a tensor of shape [batch_size, n_steps, n_neurons] = [4, 2, 5]. states is the last-step output of every layer, i.e. a tuple of three tensors, each of shape [batch_size, n_neurons] = [4, 5]. Inspecting the data again, the last array in states is exactly the top layer's output at each instance's last valid step in outputs, as the sketch below checks.
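A quick check, using the outputs_val, states_val, and seq_length_batch from the run above:

import numpy as np

# states_val is a tuple with one [4, 5] array per layer; its last entry is
# the top layer, whose last valid step matches outputs_val.
assert len(states_val) == 3
for i, length in enumerate(seq_length_batch):
    assert np.allclose(states_val[-1][i], outputs_val[i, length - 1])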
Next we look at the case where the cell factory is BasicLSTMCell; we only cover the multi-layer case. All we need to do is replace BasicRNNCell above with BasicLSTMCell, as in the sketch below.
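A minimal sketch of the change; everything else in the previous script stays the same (the literal replacement keeps the relu activation):

# Only the cell construction changes; the rest of the script is unchanged.
layers = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons,
                                       activation=tf.nn.relu)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)

Running the same driver code prints: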
outputs:
Tensor("rnn/transpose_1:0", shape=(?, 2, 5), dtype=float32)
states:
(LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_3:0' shape=(?, 5) dtype=float32>,
h=<tf.Tensor 'rnn/while/Exit_4:0' shape=(?, 5) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_5:0' shape=(?, 5) dtype=float32>,
h=<tf.Tensor 'rnn/while/Exit_6:0' shape=(?, 5) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_7:0' shape=(?, 5) dtype=float32>,
h=<tf.Tensor 'rnn/while/Exit_8:0' shape=(?, 5) dtype=float32>))
outputs_val:
[[[1.2949290e-04 0.0000000e+00 2.7623639e-04 0.0000000e+00 0.0000000e+00]
[9.4675866e-05 0.0000000e+00 2.0214770e-04 0.0000000e+00 0.0000000e+00]]
[[4.3100454e-06 4.2123037e-07 1.4312843e-06 0.0000000e+00 0.0000000e+00]
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]
[[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]
[[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]]
states_val:
(LSTMStateTuple(
c=array([[0. , 0. , 0.04676079, 0.04284539, 0. ],
[0. , 0. , 0.0115245 , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ]],
dtype=float32),
h=array([[0. , 0. , 0.00035096, 0.04284406, 0. ],
[0. , 0. , 0.00142574, 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. ]],
dtype=float32)),
LSTMStateTuple(
c=array([[0.0000000e+00, 1.0477135e-02, 4.9871090e-03, 8.2785974e-04,
0.0000000e+00],
[0.0000000e+00, 2.3306280e-04, 0.0000000e+00, 9.9445322e-05,
5.9535629e-05],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00]], dtype=float32),
h=array([[0.00000000e+00, 5.23016974e-03, 2.47756205e-03, 4.11730434e-04,
0.00000000e+00],
[0.00000000e+00, 1.16522635e-04, 0.00000000e+00, 4.97301044e-05,
2.97713632e-05],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00]], dtype=float32)),
LSTMStateTuple(
c=array([[1.8937115e-04, 0.0000000e+00, 4.0442235e-04, 0.0000000e+00,
0.0000000e+00],
[8.6200516e-06, 8.4243663e-07, 2.8625946e-06, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00]], dtype=float32),
h=array([[9.4675866e-05, 0.0000000e+00, 2.0214770e-04, 0.0000000e+00,
0.0000000e+00],
[4.3100454e-06, 4.2123037e-07, 1.4312843e-06, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00]], dtype=float32)))
The LSTM network structure is shown in the figure below (figure not included).
An LSTM cell has two states, c and h, rather than the single state h of a plain RNN cell.
For background on LSTM, see the blog post: LSTM理論知識講解 (LSTM theory explained).
In TensorFlow, an LSTM cell's c and h are packed together and called an LSTMStateTuple.
Our states therefore contains three LSTMStateTuples, and each LSTMStateTuple holds the last-step output of one layer. That output carries two pieces of information: h, the short-term memory, and c, the long-term memory, each of shape [batch_size, n_neurons] = [4, 5]. The h of the last LSTMStateTuple in states is exactly the last valid step of outputs (checked in the sketch below).
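A quick check of that last claim, using the outputs_val, states_val, and seq_length_batch from the LSTM run:

import numpy as np

# states_val[-1] is the top layer's LSTMStateTuple; its h member (the
# short-term state) equals the last valid step of outputs_val.
top_h = states_val[-1].h
for i, length in enumerate(seq_length_batch):
    assert np.allclose(top_h[i], outputs_val[i, length - 1])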
Reference blog: https://blog.csdn.net/junjun150013652/article/details/81331448