Deep Learning with TensorFlow (6)

#RNN (Recurrent Neural Network)

  • A BP (feedforward) neural network has no feedback connections, whereas an RNN does.

An RNN suffers from the vanishing-gradient problem; the signal keeps weakening as time goes by:

[figure omitted]
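To see why the signal decays, consider a plain (non-LSTM) RNN with hidden state h_t = tanh(W h_{t-1} + U x_t + b). A sketch of the standard argument (not from the course): backpropagating from step T to step t multiplies the gradient by one Jacobian per step,

$$\frac{\partial h_T}{\partial h_t} = \prod_{k=t+1}^{T} \operatorname{diag}\big(\tanh'(z_k)\big)\, W, \qquad z_k = W h_{k-1} + U x_k + b.$$

Since |tanh'| <= 1, this product tends to shrink exponentially as T - t grows, so gradients (and hence signals) from distant time steps vanish.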

#LSTM (Long Short-Term Memory)

  • Output gate: decides how much of the signal is output;
  • Input gate: decides whether the signal may enter: a useful signal is let in, a useless one is driven to 0;
  • Forget gate: decides how much the signal decays;
  • All three gates are learned during training (see the sketch after this list).
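To make the three gates concrete, here is a minimal single-step LSTM sketch in plain numpy. This is illustrative only, not the course's code; the names lstm_step, W, U, b are hypothetical.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b are dicts of weight matrices / bias vectors, one entry per gate
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])        # input gate: may the signal enter?
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])        # forget gate: how much does the old signal decay?
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])        # output gate: how much signal is emitted?
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate cell state
    c_t = f * c_prev + i * c_tilde  # keep part of the old memory, admit part of the new signal
    h_t = o * np.tanh(c_t)          # emit a gated view of the cell state
    return h_t, c_t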

Signal propagation during operation: [figure omitted]

The LSTM can thus control the signal: [figure omitted]

#Implementing an LSTM in TensorFlow

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

# Each input image is 28*28
n_inputs = 28      # one row of the image is fed per time step, 28 values per row
max_time = 28      # 28 rows in total, i.e. 28 time steps
lstm_size = 100    # number of hidden units
n_classes = 10     # 10 classes
batch_size = 50    # 50 samples per batch
n_batch = mnist.train.num_examples // batch_size  # number of batches

# None means the first dimension can be of any length
x = tf.placeholder(tf.float32, [None, 784])
# the correct labels
y = tf.placeholder(tf.float32, [None, 10])

# Initialize the weights
weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev=0.1))
# Initialize the biases
biases = tf.Variable(tf.constant(0.1, shape=[n_classes]))

# Define the RNN network
def RNN(X, weights, biases):
    # inputs: [batch_size, max_time, n_inputs]
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    # Define the basic LSTM cell
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    # Return logits; tf.nn.softmax_cross_entropy_with_logits below applies
    # softmax itself, so applying tf.nn.softmax here would softmax twice.
    results = tf.matmul(final_state[1], weights) + biases
    return results

# Compute the RNN's output (logits)
prediction = RNN(x, weights, biases)
# Loss function: softmax cross-entropy on the logits
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
# Optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Store the results in a list of booleans
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))  # argmax returns the index of the largest value in a 1-D tensor
# Compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))  # cast correct_prediction to float32
# Initialization
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})

        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print('Iter ' + str(epoch) + ', Testing Accuracy = ' + str(acc))

Execution results: [output omitted]

#Exercises

1. Work out how many dimensions outputs and final_state each have, and what each dimension means;

2. Explain how the Block in the figure (omitted here) operates:

  • The Block is the hidden-layer portion of the LSTM network;
  • i stands for input, i.e. the input gate;
  • f stands for forget, i.e. the forget gate;
  • o stands for output, i.e. the output gate;
  • W and U denote weight matrices;
  • x denotes the input data;
  • h denotes the hidden state;
  • c is the cell state;
  • b is the bias;
  • t denotes the time step;
  • σ (sigma) denotes an activation function, either the hyperbolic tangent (tanh) or the sigmoid function; the equations after this list combine these symbols.
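Putting these symbols together, the Block computes the textbook LSTM equations (shown here for reference; this is the standard formulation the figure depicts):

$$\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}$$

Here ⊙ is element-wise multiplication, and ~c_t is the candidate cell state mentioned in the reference answer below.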


#Reference Answers

1. Inspect the function's docstring, which explains both return values:

outputs: 3 dimensions

    If time_major == False (default), this will be a `Tensor` shaped:
      `[batch_size, max_time, cell.output_size]`.

    If time_major == True, this will be a `Tensor` shaped:
      `[max_time, batch_size, cell.output_size]`.

    Note, if `cell.output_size` is a (possibly nested) tuple of integers
    or `TensorShape` objects, then `outputs` will be a tuple having the
    same structure as `cell.output_size`, containing Tensors having shapes
    corresponding to the shape data in `cell.output_size`.
 state: The final state.  If `cell.state_size` is an int, this
    will be shaped `[batch_size, cell.state_size]`.  If it is a
    `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
    If it is a (possibly nested) tuple of ints or `TensorShape`, this will
    be a tuple having the corresponding shapes. If cells are `LSTMCells`
    `state` will be a tuple containing a `LSTMStateTuple` for each cell.
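As a quick check, one can build a standalone cell with the same sizes as above and print what tf.nn.dynamic_rnn returns. A minimal sketch; the demo_* names are hypothetical:

import tensorflow as tf

# [batch_size, max_time, n_inputs], matching the network above
demo_x = tf.placeholder(tf.float32, [None, 28, 28])
demo_cell = tf.contrib.rnn.BasicLSTMCell(100)
demo_outputs, demo_state = tf.nn.dynamic_rnn(demo_cell, demo_x, dtype=tf.float32)

print(demo_outputs.shape)  # (?, 28, 100): [batch_size, max_time, cell.output_size]
print(demo_state)          # LSTMStateTuple(c=..., h=...), each of shape (?, 100)

So for a single LSTM layer, final_state is an LSTMStateTuple (c, h), each of shape [batch_size, lstm_size], while outputs stacks the hidden state h at every time step; with no sequence_length argument, outputs[:, -1, :] equals final_state[1].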

2.

  • The hidden state is simply the Block's output;
  • ~Ct denotes the candidate cell state (marked at the cursor position in the course's figure, which is omitted here);
  • The dashed line denotes c_(t-1);
  • The cell state is the signal carried by the middle cell.



PS. These are my notes for the course 《深度学习框架Tensorflow学习与应用》 (Learning and Applying the TensorFlow Deep Learning Framework). 【http://www.bilibili.com/video/av20542427/?share_source=copy_link&p=4&ts=1551709559&share_medium=iphone&bbid=7db773463cc4248e755f030556bc67d1】
