Deep Learning with TensorFlow (Part 6)

#RNN (Recurrent Neural Network)

  • A BP (plain feedforward) neural network has no feedback loops, whereas an RNN does.

RNNs suffer from the vanishing-gradient problem: as time goes on, the signal keeps weakening (figure from the course omitted).
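As a quick editorial illustration of the decay (not from the course): backpropagating through an unrolled RNN multiplies the error signal by roughly the same recurrent factor at every time step, so any factor below 1 shrinks it exponentially:

# Minimal sketch of exponential signal decay in an unrolled RNN.
# The factor 0.5 is purely illustrative, not a trained weight.
recurrent_factor = 0.5
signal = 1.0
for t in range(1, 11):
    signal *= recurrent_factor
    print('step %2d: signal = %.6f' % (t, signal))
# After 10 steps the signal is about 0.001, so early time steps
# contribute almost nothing to the gradient.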

#LSTM (Long Short-Term Memory)

  • Output gate: decides how much of the signal is emitted;
  • Input gate: decides whether the signal may enter; a useful signal is let in, a useless one is squashed to 0;
  • Forget gate: decides how strongly the stored signal decays;
  • All three gates are learned during training (a minimal sketch of one LSTM step follows this list).
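To make the gates concrete, here is a minimal NumPy sketch of a single LSTM time step in the standard formulation (an editorial illustration; the names i, f, o, W, U, b, h, c match the block diagram in the exercise section below):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    # params maps each gate name to a (W, U, b) triple: W multiplies the
    # input x_t, U multiplies the previous hidden state h_prev, b is a bias.
    def transform(name, activation):
        W, U, b = params[name]
        return activation(W @ x_t + U @ h_prev + b)

    i = transform('input',  sigmoid)  # input gate: can the signal come in?
    f = transform('forget', sigmoid)  # forget gate: how much does the old state decay?
    o = transform('output', sigmoid)  # output gate: how much signal is emitted?
    g = transform('cand',   np.tanh)  # candidate cell state (~C_t)

    c_t = f * c_prev + i * g          # keep part of the old state, add the gated input
    h_t = o * np.tanh(c_t)            # hidden state = the block's output
    return h_t, c_t

# Tiny usage example with random parameters
rng = np.random.RandomState(0)
n_in, n_hidden = 4, 3
params = {name: (rng.randn(n_hidden, n_in) * 0.1,
                 rng.randn(n_hidden, n_hidden) * 0.1,
                 np.zeros(n_hidden))
          for name in ('input', 'forget', 'output', 'cand')}
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
h, c = lstm_step(rng.randn(n_in), h, c, params)
print(h, c)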

Signal flow while the network is working (figure from the course omitted).

How the LSTM gates control the signal (figure from the course omitted).

#Implementing LSTM in TensorFlow
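The code below uses the TensorFlow 1.x API; note that tf.contrib and the tensorflow.examples.tutorials.mnist loader are no longer available in TensorFlow 2.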

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# load the MNIST dataset
mnist = input_data.read_data_sets('MNIST_data/',one_hot=True)

# each input image is 28*28
n_inputs = 28 # feed one row at a time; each row has 28 values
max_time = 28 # 28 rows in total, i.e. 28 time steps
lstm_size = 100 # number of hidden units
n_classes = 10 # 10 classes
batch_size = 50 # 50 samples per batch
n_batch = mnist.train.num_examples // batch_size # number of batches per epoch

# None means the first dimension can have any length
x = tf.placeholder(tf.float32,[None,784])
# the correct labels
y = tf.placeholder(tf.float32,[None,10])

# initialize the weights
weights = tf.Variable(tf.truncated_normal([lstm_size,n_classes], stddev=0.1))
# initialize the biases
biases = tf.Variable(tf.constant(0.1, shape=[n_classes]))

# define the RNN network
def RNN(X,weights,biases):
    # inputs: [batch_size, max_time, n_inputs]
    inputs = tf.reshape(X,[-1,max_time,n_inputs])
    # define the basic LSTM cell
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state (the output at the last time step)
    outputs,final_state = tf.nn.dynamic_rnn(lstm_cell,inputs,dtype=tf.float32)
    # return raw logits; softmax is applied inside the loss function below
    # (applying tf.nn.softmax here as well would double-apply softmax)
    results = tf.matmul(final_state[1],weights) + biases
    return results

# compute the network's output
prediction = RNN(x, weights, biases)
# loss function; softmax_cross_entropy_with_logits expects raw logits
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
# optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# store the results in a list of booleans
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1)) # argmax returns the index of the largest entry along the given axis
# compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) # cast correct_prediction to float32, then average
# initialize the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys})
            
        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels})
        print('Iter ' + str(epoch) + ', Testing Accuracy = ' + str(acc))

Execution result (the accuracy printout screenshot is omitted).

#Exercises

1. Work out how many dimensions outputs and final_state have, and what each dimension means;

2. Explain how the Block in the figure below works (figure from the course omitted):

  • a Block is the hidden-layer unit of an LSTM network;
  • i stands for input, the input gate;
  • f stands for forget, the forget gate;
  • o stands for output, the output gate;
  • W and U are weight matrices;
  • x is the input data;
  • h is the hidden state;
  • c is the cell state;
  • b is a bias;
  • t is the time step;
  • sigma denotes an activation function, either the hyperbolic tangent (tanh) or the sigmoid function (the equations after this list combine these symbols).
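Putting the symbols together, the standard LSTM block equations (an editorial rendering of the usual formulation, matching this notation) are:

\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{C}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{C}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}

where \odot denotes element-wise multiplication.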

 

#Reference Answers

1. Inspect the function itself; its docstring explains:

outputs: 3 dimensions

    If time_major == False (default), this will be a `Tensor` shaped:
      `[batch_size, max_time, cell.output_size]`.

    If time_major == True, this will be a `Tensor` shaped:
      `[max_time, batch_size, cell.output_size]`.

    Note, if `cell.output_size` is a (possibly nested) tuple of integers
    or `TensorShape` objects, then `outputs` will be a tuple having the
    same structure as `cell.output_size`, containing Tensors having shapes
    corresponding to the shape data in `cell.output_size`.
state: The final state.  If `cell.state_size` is an int, this
    will be shaped `[batch_size, cell.state_size]`.  If it is a
    `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
    If it is a (possibly nested) tuple of ints or `TensorShape`, this will
    be a tuple having the corresponding shapes. If cells are `LSTMCells`
    `state` will be a tuple containing a `LSTMStateTuple` for each cell.
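One can also check this empirically; a minimal standalone sketch (assuming the same TensorFlow 1.x setup as the code above):

import tensorflow as tf

# mirror the network above just to inspect the static shapes
x = tf.placeholder(tf.float32, [None, 784])
inputs = tf.reshape(x, [-1, 28, 28])   # [batch_size, max_time, n_inputs]
lstm_cell = tf.contrib.rnn.BasicLSTMCell(100)
outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)

print(outputs.shape)         # (?, 28, 100) = [batch_size, max_time, lstm_size]
print(final_state[0].shape)  # (?, 100) = cell state c:   [batch_size, lstm_size]
print(final_state[1].shape)  # (?, 100) = hidden state h: [batch_size, lstm_size]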

2.

  • the hidden state is in fact the Block's output;
  • ~Ct is the candidate cell state, the value highlighted in the course's figure (screenshot omitted);
  • the dashed line represents c_(t-1);
  • the cell state is the signal carried through the middle of the cell.

 


PS. These are my notes for the course 《深度學習框架Tensorflow學習與應用》 ("Learning and Applying the TensorFlow Deep Learning Framework"). 【http://www.bilibili.com/video/av20542427/?share_source=copy_link&p=4&ts=1551709559&share_medium=iphone&bbid=7db773463cc4248e755f030556bc67d1】
