TensorFlow Example: Implementing an LSTM-Based Language Model

RNN

When people think, they do not start from scratch every time; they keep some results of previous thinking to support the current decision. In a conversation, for example, we understand each sentence from its context rather than analyzing every sentence in isolation. Traditional neural networks cannot do this, which may be one of their major shortcomings. A convolutional neural network can classify images, for instance, but it cannot relate what happens across the frames of a video, because it has no way to use information from the previous frame. Recurrent neural networks (RNNs) solve exactly this problem.

[Figure: an RNN with input x, internal node s, and output o, unrolled over time]

As shown in the figure above, x is the input to the RNN, s is a node of the RNN, and o is the output. We feed the input data x into the RNN, compute the output o through the network, and feed some information (the state) back into the network's input. Comparing o with the label gives an error, and with this error the network can be trained using gradient descent and Back-Propagation Through Time (BPTT). BPTT is similar to the classic back-propagation (BP) used to train feed-forward networks: it back-propagates the error to compute gradients and update the network's weights. There is also a method called Real-Time Recurrent Learning (RTRL), which computes the gradients in the forward direction, but its computational complexity is rather high.
When unrolled, an RNN resembles a chain of ordinary neural networks with a series of inputs x and a series of outputs o, where each layer passes information on to the next. This chained structure is naturally well suited to processing and analyzing time-series data. Note that every layer of the unrolled network shares the same parameters, so we do not need to train the parameters of hundreds or thousands of layers, only those of a single RNN layer. This is the elegance of the structure, and the parameter-sharing idea is very similar to weight sharing in convolutional networks.
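
To make the parameter sharing concrete, here is a minimal NumPy sketch of a vanilla RNN unrolled over a few time steps. It is not part of the PTB model below; the function rnn_forward and the weight names W_xh, W_hh, W_ho are illustrative only. Every time step reuses the same three weight matrices:

import numpy as np

def rnn_forward(x_seq, W_xh, W_hh, W_ho, h0):
    """x_seq: [num_steps, input_dim]; returns one output per time step."""
    h = h0
    outputs = []
    for x_t in x_seq:                        # unrolled loop over the time steps
        h = np.tanh(x_t @ W_xh + h @ W_hh)   # the same W_xh, W_hh at every step
        outputs.append(h @ W_ho)             # output o_t for this step
    return np.stack(outputs), h

rng = np.random.default_rng(0)
x_seq = rng.normal(size=(5, 3))              # 5 time steps, input_dim=3
W_xh = rng.normal(size=(3, 4))               # input -> hidden
W_hh = rng.normal(size=(4, 4))               # hidden -> hidden (shared across steps)
W_ho = rng.normal(size=(4, 2))               # hidden -> output
o, h = rnn_forward(x_seq, W_xh, W_hh, W_ho, np.zeros(4))
print(o.shape)                               # (5, 2): one output per time step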

LSTM

For some simple problems, only a small amount of recently seen sequence information may be enough. But complex problems may require information from much earlier, even from the very beginning of the sequence, and inputs that are far apart in time are hard for an RNN to remember. Long-term dependencies are therefore the fatal weakness of a vanilla RNN.
LSTM was designed precisely to solve the long-term dependency problem: without particularly elaborate hyperparameter tuning, it can remember long-range information by default.

[Figure: the internal structure of an LSTM cell]
The internal structure of an LSTM is more complex than that of a plain RNN: it contains four neural network layers. In the figure, the small circles denote point-wise operations such as vector addition and element-wise multiplication, while the small boxes denote neural network layers with learnable parameters.

  • The straight line running along the top of the LSTM unit carries the cell state. It threads through all the chained LSTM units, flowing from the first unit to the last, and is subject to only a few minor linear interactions along the way.
  • As the state travels along this path, each LSTM unit can add information to it or remove information from it; these modifications of the information flow are controlled by the LSTM's gates.
  • Each gate consists of a sigmoid layer followed by a point-wise multiplication. The sigmoid layer outputs values between 0 and 1 that directly control the fraction of information allowed through.
  • Every LSTM unit contains three such gates, which maintain and control the unit's state. By storing and modifying this state information, the LSTM unit achieves long-range memory; the corresponding update equations are sketched right after this list.
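
The behavior in the list above can be written down compactly. The following is a minimal NumPy sketch of a single LSTM step under the standard formulation; it is not the implementation of the BasicLSTMCell used later, and the names lstm_step, W, and b are illustrative only. The forget, input, and output gates are each a sigmoid layer followed by a point-wise multiplication, and together they decide what to erase from, add to, and read out of the cell state c:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W: [input_dim + hidden_dim, 4 * hidden_dim]."""
    z = np.concatenate([x, h_prev]) @ W + b
    i, g, f, o = np.split(z, 4)        # input gate, candidate values, forget gate, output gate
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)    # gates decide what to erase from / add to the state
    h = o * np.tanh(c)                 # output gate controls what the cell exposes
    return h, c

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
W = rng.normal(size=(input_dim + hidden_dim, 4 * hidden_dim))
b = np.zeros(4 * hidden_dim)
h, c = lstm_step(rng.normal(size=input_dim), h, c, W, b)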

Implementing the LSTM in TensorFlow

Below we use an LSTM to implement a language model: given the preceding context, i.e., the words that have appeared so far, the model predicts the probability of the next word. The dataset is the Penn Treebank (PTB), and the code is based on the official TensorFlow PTB tutorial, relying on its reader module (imported below as reader) for loading the data. Model quality is measured by perplexity, the exponential of the average per-word cross-entropy loss.
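
A tiny illustrative sketch of that relationship, with made-up loss values (the run_epoch function below computes perplexity as np.exp(costs / iters)):

import numpy as np

per_word_losses = [6.2, 5.9, 5.7]               # hypothetical average per-word losses
perplexity = np.exp(np.mean(per_word_losses))   # same idea as np.exp(costs / iters) below
print("perplexity: %.1f" % perplexity)          # lower is better

The full model code, adapted from the TensorFlow PTB tutorial, follows.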

#%%
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================


import time
import numpy as np
import tensorflow as tf
import reader  # reader.py from the TensorFlow PTB tutorial (PTB data loading utilities)

#flags = tf.flags
#logging = tf.logging



#flags.DEFINE_string("save_path", None,
#                    "Model output directory.")
#flags.DEFINE_bool("use_fp16", False,
#                  "Train using 16-bit floats instead of 32bit floats")

#FLAGS = flags.FLAGS


#def data_type():
#  return tf.float16 if FLAGS.use_fp16 else tf.float32


class PTBInput(object):
  """The input data."""

  def __init__(self, config, data, name=None):
    self.batch_size = batch_size = config.batch_size
    self.num_steps = num_steps = config.num_steps
    # Number of iterations needed to cover the data once in
    # [batch_size, num_steps] chunks.
    self.epoch_size = ((len(data) // batch_size) - 1) // num_steps
    # ptb_producer yields tensors for the input words and the targets
    # (the same sequence shifted by one word).
    self.input_data, self.targets = reader.ptb_producer(
        data, batch_size, num_steps, name=name)


class PTBModel(object):
  """The PTB model."""

  def __init__(self, is_training, config, input_):
    self._input = input_

    batch_size = input_.batch_size
    num_steps = input_.num_steps
    size = config.hidden_size
    vocab_size = config.vocab_size

    # Slightly better results can be obtained with forget gate biases
    # initialized to 1 but the hyperparameters of the model would need to be
    # different than reported in the paper.
    def lstm_cell():
      return tf.contrib.rnn.BasicLSTMCell(
          size, forget_bias=0.0, state_is_tuple=True)
    attn_cell = lstm_cell
    if is_training and config.keep_prob < 1:
      def attn_cell():
        return tf.contrib.rnn.DropoutWrapper(
            lstm_cell(), output_keep_prob=config.keep_prob)
    cell = tf.contrib.rnn.MultiRNNCell(
        [attn_cell() for _ in range(config.num_layers)], state_is_tuple=True)

    self._initial_state = cell.zero_state(batch_size, tf.float32)

    with tf.device("/cpu:0"):
      embedding = tf.get_variable(
          "embedding", [vocab_size, size], dtype=tf.float32)
      inputs = tf.nn.embedding_lookup(embedding, input_.input_data)

    if is_training and config.keep_prob < 1:
      inputs = tf.nn.dropout(inputs, config.keep_prob)

    # Simplified version of models/tutorials/rnn/rnn.py's rnn().
    # This builds an unrolled LSTM for tutorial purposes only.
    # In general, use the rnn() or state_saving_rnn() from rnn.py.
    #
    # The alternative version of the code below is:
    #
    # inputs = tf.unstack(inputs, num=num_steps, axis=1)
    # outputs, state = tf.nn.rnn(cell, inputs,
    #                            initial_state=self._initial_state)
    outputs = []
    state = self._initial_state
    with tf.variable_scope("RNN"):
      for time_step in range(num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)

    output = tf.reshape(tf.concat(outputs, 1), [-1, size])
    softmax_w = tf.get_variable(
        "softmax_w", [size, vocab_size], dtype=tf.float32)
    softmax_b = tf.get_variable("softmax_b", [vocab_size], dtype=tf.float32)
    logits = tf.matmul(output, softmax_w) + softmax_b
    loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
        [logits],
        [tf.reshape(input_.targets, [-1])],
        [tf.ones([batch_size * num_steps], dtype=tf.float32)])
    self._cost = cost = tf.reduce_sum(loss) / batch_size
    self._final_state = state

    if not is_training:
      return

    self._lr = tf.Variable(0.0, trainable=False)
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
                                      config.max_grad_norm)
    optimizer = tf.train.GradientDescentOptimizer(self._lr)
    self._train_op = optimizer.apply_gradients(
        zip(grads, tvars),
        global_step=tf.contrib.framework.get_or_create_global_step())

    self._new_lr = tf.placeholder(
        tf.float32, shape=[], name="new_learning_rate")
    self._lr_update = tf.assign(self._lr, self._new_lr)

  def assign_lr(self, session, lr_value):
    session.run(self._lr_update, feed_dict={self._new_lr: lr_value})

  @property
  def input(self):
    return self._input

  @property
  def initial_state(self):
    return self._initial_state

  @property
  def cost(self):
    return self._cost

  @property
  def final_state(self):
    return self._final_state

  @property
  def lr(self):
    return self._lr

  @property
  def train_op(self):
    return self._train_op


class SmallConfig(object):
  """Small config."""
  init_scale = 0.1
  learning_rate = 1.0
  max_grad_norm = 5
  num_layers = 2
  num_steps = 20
  hidden_size = 200
  max_epoch = 4
  max_max_epoch = 13
  keep_prob = 1.0
  lr_decay = 0.5
  batch_size = 20
  vocab_size = 10000


class MediumConfig(object):
  """Medium config."""
  init_scale = 0.05
  learning_rate = 1.0
  max_grad_norm = 5
  num_layers = 2
  num_steps = 35
  hidden_size = 650
  max_epoch = 6
  max_max_epoch = 39
  keep_prob = 0.5
  lr_decay = 0.8
  batch_size = 20
  vocab_size = 10000


class LargeConfig(object):
  """Large config."""
  init_scale = 0.04
  learning_rate = 1.0
  max_grad_norm = 10
  num_layers = 2
  num_steps = 35
  hidden_size = 1500
  max_epoch = 14
  max_max_epoch = 55
  keep_prob = 0.35
  lr_decay = 1 / 1.15
  batch_size = 20
  vocab_size = 10000


class TestConfig(object):
  """Tiny config, for testing."""
  init_scale = 0.1
  learning_rate = 1.0
  max_grad_norm = 1
  num_layers = 1
  num_steps = 2
  hidden_size = 2
  max_epoch = 1
  max_max_epoch = 1
  keep_prob = 1.0
  lr_decay = 0.5
  batch_size = 20
  vocab_size = 10000


def run_epoch(session, model, eval_op=None, verbose=False):
  """Runs the model on the given data."""
  start_time = time.time()
  costs = 0.0
  iters = 0
  state = session.run(model.initial_state)

  fetches = {
      "cost": model.cost,
      "final_state": model.final_state,
  }
  if eval_op is not None:
    fetches["eval_op"] = eval_op

  for step in range(model.input.epoch_size):
    feed_dict = {}
    for i, (c, h) in enumerate(model.initial_state):
      feed_dict[c] = state[i].c
      feed_dict[h] = state[i].h

    vals = session.run(fetches, feed_dict)
    cost = vals["cost"]
    state = vals["final_state"]

    costs += cost
    iters += model.input.num_steps

    if verbose and step % (model.input.epoch_size // 10) == 10:
      print("%.3f perplexity: %.3f speed: %.0f wps" %
            (step * 1.0 / model.input.epoch_size, np.exp(costs / iters),
             iters * model.input.batch_size / (time.time() - start_time)))

  # Perplexity: the exponential of the average per-word loss.
  return np.exp(costs / iters)




raw_data = reader.ptb_raw_data('simple-examples/data/')
train_data, valid_data, test_data, _ = raw_data

config = SmallConfig()
eval_config = SmallConfig()
eval_config.batch_size = 1
eval_config.num_steps = 1

with tf.Graph().as_default():
  initializer = tf.random_uniform_initializer(-config.init_scale,
                                              config.init_scale)

  with tf.name_scope("Train"):
    train_input = PTBInput(config=config, data=train_data, name="TrainInput")
    with tf.variable_scope("Model", reuse=None, initializer=initializer):
      m = PTBModel(is_training=True, config=config, input_=train_input)
      #tf.scalar_summary("Training Loss", m.cost)
      #tf.scalar_summary("Learning Rate", m.lr)

  with tf.name_scope("Valid"):
    valid_input = PTBInput(config=config, data=valid_data, name="ValidInput")
    with tf.variable_scope("Model", reuse=True, initializer=initializer):
      mvalid = PTBModel(is_training=False, config=config, input_=valid_input)
      #tf.scalar_summary("Validation Loss", mvalid.cost)

  with tf.name_scope("Test"):
    test_input = PTBInput(config=eval_config, data=test_data, name="TestInput")
    with tf.variable_scope("Model", reuse=True, initializer=initializer):
      mtest = PTBModel(is_training=False, config=eval_config,
                       input_=test_input)

  sv = tf.train.Supervisor()
  with sv.managed_session() as session:
    for i in range(config.max_max_epoch):
      lr_decay = config.lr_decay ** max(i + 1 - config.max_epoch, 0.0)
      m.assign_lr(session, config.learning_rate * lr_decay)

      print("Epoch: %d Learning rate: %.3f" % (i + 1, session.run(m.lr)))
      train_perplexity = run_epoch(session, m, eval_op=m.train_op,
                                   verbose=True)
      print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity))
      valid_perplexity = run_epoch(session, mvalid)
      print("Epoch: %d Valid Perplexity: %.3f" % (i + 1, valid_perplexity))

    test_perplexity = run_epoch(session, mtest)
    print("Test Perplexity: %.3f" % test_perplexity)

    # if FLAGS.save_path:
    #   print("Saving model to %s." % FLAGS.save_path)
    #   sv.saver.save(session, FLAGS.save_path, global_step=sv.global_step)

#if __name__ == "__main__":
#  tf.app.run()