Building a Classical Chinese Poetry Generation Model with an RNN

We know that an RNN (recurrent neural network) predicts the next time step from its current state and the current input, while an LSTM (long short-term memory network) can also remember context that lies far from the current position.
Here we train a classical-poetry generation model on top of this kind of predictive model.
First we need a dataset of classical poems: the Complete Tang Poems, 34,646 poems in total. I have uploaded the data file to my CSDN account; download it there if you need it:
http://download.csdn.net/download/qq_34470213/10150761

Training the Model

1. Building the dictionary

  • We first read the poetry collection, split it into individual poems, and store them in a list; the length of the list then tells us how many poems there are.

Each poem has to be read in, so we use the open function.

In the data file every poem has the format (title:content), so we first remove surrounding whitespace with strip, then separate the title from the content with split(':'). Since we only need the body of each poem here, we keep just the content.

Note that some titles also contain the ':' character; such lines have to be skipped, because what follows the first colon is not poem text (the try/except around the split takes care of them).
This leaves us with the content of every poem.

To mark the beginning and end of each poem, we prepend the character "[" and append the character "]"; during training, the program treats these symbols as the start and end states.
All poem bodies are appended to a list, and the length of that list is the total number of poems.

Code:

poetrys = []
with open(poetry_file, "r", encoding='utf-8') as f:
    for line in f:
        try:
            # Each line is "title:content"; lines that do not split into
            # exactly two parts raise ValueError and are skipped.
            title, content = line.strip().split(':')
            content = content.replace(' ', '')
            # Skip entries whose body contains non-poem characters.
            if '_' in content or '(' in content or '(' in content or '《' in content or '[' in content:
                continue
            # Keep only poems of a reasonable length.
            if len(content) < 5 or len(content) > 79:
                continue
            content = '[' + content + ']'  # start/end markers
            poetrys.append(content)
        except ValueError:
            pass

# Sort by length so poems in the same batch have similar lengths.
poetrys = sorted(poetrys, key=lambda line: len(line))
print('Total number of Tang poems:', len(poetrys))
  • Once we have the text of all the poems, we can encode every character, obtain the encoded form of every poem, and feed those codes into the neural network for training.

To do that, we count every character that appears anywhere in the poems. collections.Counter walks over every element of a list and returns a dict-like object mapping each element to the number of times it occurs.

We only encode the characters that are worth training on, so we first sort the dictionary by count in descending order using sorted; the key argument selects the sort key, and key=lambda x: -x[1] sorts on the second element of each pair (the count), negated so the order runs from most to least frequent.
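
As a quick, self-contained illustration of these two calls (the input here is a toy string, not the real corpus):

import collections

chars = list('床前明月光疑是地上霜床前明月')
counter = collections.Counter(chars)
# Counter({'床': 2, '前': 2, '明': 2, '月': 2, '光': 1, ...})

count_pairs = sorted(counter.items(), key=lambda x: -x[1])
# [('床', 2), ('前', 2), ('明', 2), ('月', 2), ('光', 1), ...]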

We then take the characters to encode and number them from 0. After sorting we have (character, count) tuples, and we only need the characters, which zip(*...) extracts:
list(zip([1, 2], [3, 4], [5, 6]))
--> [(1, 3, 5), (2, 4, 6)]
list(zip(*[(1, 2), (3, 4), (5, 6)]))
--> [(1, 3, 5), (2, 4, 6)]

We encode only the most frequent characters, and these form the encoding dictionary, mapping each character to a number from 0 upward.
dict(d): creates a dictionary; d must be a sequence of (key, value) tuples.
The result is a dictionary from each character to its index, starting at 0.

Every character of every poem is then encoded, i.e. looked up in the dictionary:
dict.get(key, default=None)
key -- the key to look up in the dictionary.
default -- the value returned when the key is not present.
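
For example, with a toy three-character dictionary (the real one is built from the whole corpus):

word_num_map = dict(zip(('月', '花', '人'), range(3)))
# {'月': 0, '花': 1, '人': 2}

word_num_map.get('月', 3)   # -> 0, the key exists
word_num_map.get('龍', 3)   # -> 3, the key is missing, so the default is returned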

Code:

all_words = []
for poetry in poetrys:
    all_words += [word for word in poetry]
counter = collections.Counter(all_words)

# Sort characters by frequency, most frequent first.
count_pairs = sorted(counter.items(), key=lambda x: -x[1])
words, _ = zip(*count_pairs)
# Keep the most frequent 90% of characters, plus a space used for padding.
leng = int(len(words)*0.9)
words = words[:leng]+(' ',)

# Map each character to an integer starting from 0.
word_num_map = dict(zip(words, range(len(words))))
# Characters outside the dictionary all map to len(words).
to_num = lambda word: word_num_map.get(word, len(words))

poetrys_vector = [list(map(to_num, poetry)) for poetry in poetrys]

  • Training data

Each training step takes 64 poems, i.e. 64 entries from the list, and assigns the input data x and the target data y, where y holds the correct answers used for training. (Note that because the model's task is to predict the next character, y is simply x shifted forward by one character.)
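
A minimal sketch of that shift on two toy rows (the same trick appears in the batching code below):

import numpy as np

xdata = np.array([[6, 2, 4, 6, 9],
                  [1, 4, 2, 8, 5]], dtype=np.int32)
ydata = np.copy(xdata)
ydata[:, :-1] = xdata[:, 1:]   # every position predicts the next character
# ydata == [[2, 4, 6, 9, 9],
#           [4, 2, 8, 5, 5]]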
We then define an RNN model and feed the data in for training. Training with an RNN breaks down roughly into five steps (sketched in code right after this list):
1. Define the model and its structure.
2. Zero-initialize the current state.
3. Convert the input IDs into character vectors.
4. Feed the inputs and the initial state into the model to obtain the training output.
5. Put a fully connected layer on top of the output to get the final result.
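
A minimal, runnable sketch of how these five steps map onto the TF 1.x API (the sizes here are illustrative placeholders, not values taken from the dataset):

import tensorflow as tf

batch_size, rnn_size, vocab_size = 64, 128, 6000  # illustrative sizes

input_ids = tf.placeholder(tf.int32, [batch_size, None])

# 1. Define the model and its structure: a two-layer LSTM.
cells = [tf.nn.rnn_cell.BasicLSTMCell(rnn_size, state_is_tuple=True) for _ in range(2)]
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)

# 2. Zero-initialize the current state.
initial_state = cell.zero_state(batch_size, tf.float32)

# 3. Convert input IDs to character vectors.
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.nn.embedding_lookup(embedding, input_ids)

# 4. Feed the inputs and the initial state into the model.
outputs, last_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)

# 5. A fully connected layer turns the RNN output into per-character logits.
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
logits = tf.matmul(tf.reshape(outputs, [-1, rnn_size]), softmax_w) + softmax_b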
Training is repeated many times to obtain the final state and the final loss. In this example we train for 50 epochs, and each epoch runs over every batch; with 34,646 poems and a batch size of 64, that makes 541 batches:

for epoch in range(50):
    for batch in range(541):
        train(epoch, batch)

Because the final output is the next character, the output layer must cover every code a character can take, so its size is len(words) + 1.

To guard against interruptions, save the model regularly.
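
For instance, a small helper around tf.train.Saver (the every-7-epochs policy and the paths match the full listing below):

import os
import tensorflow as tf

def maybe_checkpoint(saver, sess, epoch,
                     save_path="./save/", name="poetry.module"):
    # Save every 7th epoch so an interrupted run can be resumed.
    if epoch % 7 == 0:
        saver.save(sess, os.path.join(save_path, name), global_step=epoch)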

Generating poems:
The main steps for generating new poems with the trained network are:
Read the text file, count the occurrences of every character, and build the character/code dictionary from those counts; it converts between characters and codes.
Build the RNN network that maps an input to the corresponding output; it is written exactly as in the training model.
Load the saved network model and use the trained weights to predict new data.
Loop, converting between codes and characters, until a complete poem has been produced, then exit.

Complete training code:

import collections
import numpy as np
from tensorflow.contrib.legacy_seq2seq.python.ops.seq2seq import sequence_loss_by_example
import tensorflow as tf
import os

MODEL_SAVE_PATH = "./save/"
MODEL_NAME = "poetry.module"

# ------------------------------- Data preprocessing --------------------------- #

poetry_file = 'poetry.txt'

# Poetry collection
poetrys = []
with open(poetry_file, "r", encoding='utf-8') as f:
    for line in f:
        try:
            # Each line is "title:content"; malformed lines raise ValueError.
            title, content = line.strip().split(':')
            content = content.replace(' ', '')
            if '_' in content or '(' in content or '(' in content or '《' in content or '[' in content:
                continue
            if len(content) < 5 or len(content) > 79:
                continue
            content = '[' + content + ']'  # start/end markers
            poetrys.append(content)
        except ValueError:
            pass

poetrys = sorted(poetrys, key=lambda line: len(line))
print('Total number of Tang poems:', len(poetrys))

all_words = []
for poetry in poetrys:
    all_words += [word for word in poetry]
counter = collections.Counter(all_words)

# Sort characters by frequency, most frequent first.
count_pairs = sorted(counter.items(), key=lambda x: -x[1])
words, _ = zip(*count_pairs)

# Keep the most frequent 90% of characters, plus a space used for padding.
leng = int(len(words)*0.9)
words = words[:leng]+(' ',)

word_num_map = dict(zip(words, range(len(words))))

# Characters outside the dictionary all map to len(words).
to_num = lambda word: word_num_map.get(word, len(words))
poetrys_vector = [list(map(to_num, poetry)) for poetry in poetrys]
# [[314, 3199, 367, 1556, 26, 179, 680, 0, 3199, 41, 506, 40, 151, 4, 98, 1],
# [339, 3, 133, 31, 302, 653, 512, 0, 37, 148, 294, 25, 54, 833, 3, 1, 965, 1315, 377, 1700, 562, 21, 37, 0, 2, 1253, 21, 36, 264, 877, 809, 1]
# ....]

# Train on 64 poems at a time
batch_size = 64
n_chunk = len(poetrys_vector) // batch_size
x_batches = []
y_batches = []

for i in range(n_chunk):
    start_index = i * batch_size
    end_index = start_index + batch_size

    batches = poetrys_vector[start_index:end_index]
    length = max(map(len, batches))
    # Pad every poem in the batch to the same length with the space character.
    xdata = np.full((batch_size, length), word_num_map[' '], np.int32)
    for row in range(batch_size):
        xdata[row, :len(batches[row])] = batches[row]
    # The target is the input shifted forward by one character.
    ydata = np.copy(xdata)
    ydata[:, :-1] = xdata[:, 1:]
    """
    xdata             ydata
    [6,2,4,6,9]       [2,4,6,9,9]
    [1,4,2,8,5]       [4,2,8,5,5]
    """
    x_batches.append(xdata)
    y_batches.append(ydata)

# ---------------------------------------RNN--------------------------------------#

input_data = tf.placeholder(tf.int32, [batch_size, None])
output_targets = tf.placeholder(tf.int32, [batch_size, None])


# Define the RNN
def neural_network(model='lstm', rnn_size=128, num_layers=2):
    if model == 'rnn':
        cell_fun = tf.nn.rnn_cell.BasicRNNCell
    elif model == 'gru':
        cell_fun = tf.nn.rnn_cell.GRUCell
    elif model == 'lstm':
        cell_fun = tf.nn.rnn_cell.BasicLSTMCell

    # Stack num_layers cells; note that newer TF 1.x versions require
    # distinct cell objects per layer rather than [cell] * num_layers.
    cells = [cell_fun(rnn_size, state_is_tuple=True) for _ in range(num_layers)]
    cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)

    initial_state = cell.zero_state(batch_size, tf.float32)

    with tf.variable_scope('rnnlm'):
        softmax_w = tf.get_variable("softmax_w", [rnn_size, len(words) + 1])
        softmax_b = tf.get_variable("softmax_b", [len(words) + 1])
        with tf.device("/cpu:0"):
            embedding = tf.get_variable("embedding", [len(words) + 1, rnn_size])
            inputs = tf.nn.embedding_lookup(embedding, input_data)

    outputs, last_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state, scope='rnnlm')
    output = tf.reshape(outputs, [-1, rnn_size])

    logits = tf.matmul(output, softmax_w) + softmax_b
    probs = tf.nn.softmax(logits)
    return logits, last_state, probs, cell, initial_state


# Training
def train_neural_network():
    logits, last_state, _, _, _ = neural_network()
    targets = tf.reshape(output_targets, [-1])
    loss = sequence_loss_by_example([logits], [targets], [tf.ones_like(targets, dtype=tf.float32)], len(words))
    cost = tf.reduce_mean(loss)
    learning_rate = tf.Variable(0.0, trainable=False)
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), 5)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.apply_gradients(zip(grads, tvars))

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        saver = tf.train.Saver()
        for epoch in range(50):
            # Decay the learning rate as training progresses.
            sess.run(tf.assign(learning_rate, 0.002 * (0.97 ** epoch)))
            n = 0
            for batche in range(n_chunk):
                train_loss, _, _ = sess.run([cost, last_state, train_op],
                                            feed_dict={input_data: x_batches[n], output_targets: y_batches[n]})
                n += 1
                print(epoch, batche, train_loss)
            # Checkpoint every 7th epoch so an interrupted run can be resumed.
            if epoch % 7 == 0:
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=epoch)

train_neural_network()

After training, we end up with the files that store the network model (screenshot omitted).

Training took more than ten hours on my laptop. If you would rather not train it yourself, you can download my trained files and get the same results.
I have uploaded the final training results here: https://pan.baidu.com/s/1bIibbo password: ojs3

Generating Poems with the Model

To use the model, we first load it so it is ready to run.
We know a poem begins with the marker character "[", and we set the initial state to zero; from there we run the model iteratively to produce the whole poem. The end marker is "]": as soon as it appears, the poem is complete, so we exit the loop and print the result.

import collections
import numpy as np
import tensorflow as tf

# ------------------------------- Data preprocessing --------------------------- #

poetry_file = 'poetry.txt'

# Poetry collection
poetrys = []
with open(poetry_file, "r", encoding='utf-8') as f:
    for line in f:
        try:
            title, content = line.strip().split(':')
            content = content.replace(' ', '')
            if '_' in content or '(' in content or '(' in content or '《' in content or '[' in content:
                continue
            if len(content) < 5 or len(content) > 79:
                continue
            content = '[' + content + ']'
            poetrys.append(content)
        except ValueError:
            pass

poetrys = sorted(poetrys, key=lambda line: len(line))
print('Total number of Tang poems:', len(poetrys))

all_words = []
for poetry in poetrys:
    all_words += [word for word in poetry]
counter = collections.Counter(all_words)
count_pairs = sorted(counter.items(), key=lambda x: -x[1])
words, _ = zip(*count_pairs)

# Must match the training script: keep the top 90% plus the padding space,
# otherwise the variable shapes will not match the saved checkpoint.
leng = int(len(words)*0.9)
words = words[:leng]+(' ',)
word_num_map = dict(zip(words, range(len(words))))
to_num = lambda word: word_num_map.get(word, len(words))
poetrys_vector = [list(map(to_num, poetry)) for poetry in poetrys]
#[[314, 3199, 367, 1556, 26, 179, 680, 0, 3199, 41, 506, 40, 151, 4, 98, 1],
#[339, 3, 133, 31, 302, 653, 512, 0, 37, 148, 294, 25, 54, 833, 3, 1, 965, 1315, 377, 1700, 562, 21, 37, 0, 2, 1253, 21, 36, 264, 877, 809, 1]
#....]

# For generation we process one character at a time, so the batch size is 1.
batch_size = 1
n_chunk = len(poetrys_vector) // batch_size
x_batches = []
y_batches = []
for i in range(n_chunk):
    start_index = i * batch_size
    end_index = start_index + batch_size

    batches = poetrys_vector[start_index:end_index]
    length = max(map(len,batches))
    xdata = np.full((batch_size,length), word_num_map[' '], np.int32)
    for row in range(batch_size):
        xdata[row,:len(batches[row])] = batches[row]
    ydata = np.copy(xdata)
    ydata[:,:-1] = xdata[:,1:]
    """
    xdata             ydata
    [6,2,4,6,9]       [2,4,6,9,9]
    [1,4,2,8,5]       [4,2,8,5,5]
    """
    x_batches.append(xdata)
    y_batches.append(ydata)


#---------------------------------------RNN--------------------------------------#

input_data = tf.placeholder(tf.int32, [batch_size, None])
output_targets = tf.placeholder(tf.int32, [batch_size, None])
# Define the RNN (identical to the training script)
def neural_network(model='lstm', rnn_size=128, num_layers=2):
    if model == 'rnn':
        cell_fun = tf.nn.rnn_cell.BasicRNNCell
    elif model == 'gru':
        cell_fun = tf.nn.rnn_cell.GRUCell
    elif model == 'lstm':
        cell_fun = tf.nn.rnn_cell.BasicLSTMCell

    # Stack num_layers cells; note that newer TF 1.x versions require
    # distinct cell objects per layer rather than [cell] * num_layers.
    cells = [cell_fun(rnn_size, state_is_tuple=True) for _ in range(num_layers)]
    cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)

    initial_state = cell.zero_state(batch_size, tf.float32)

    with tf.variable_scope('rnnlm'):
        softmax_w = tf.get_variable("softmax_w", [rnn_size, len(words)+1])
        softmax_b = tf.get_variable("softmax_b", [len(words)+1])
        with tf.device("/cpu:0"):
            embedding = tf.get_variable("embedding", [len(words)+1, rnn_size])
            inputs = tf.nn.embedding_lookup(embedding, input_data)

    outputs, last_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state, scope='rnnlm')
    output = tf.reshape(outputs,[-1, rnn_size])

    logits = tf.matmul(output, softmax_w) + softmax_b
    probs = tf.nn.softmax(logits)
    return logits, last_state, probs, cell, initial_state

#------------------------------- Generating poems ---------------------------------#
# Use the trained model

def gen_poetry():
    def to_word(weights):
        # Sample a character from the predicted distribution: draw a uniform
        # number in [0, sum(weights)) and locate it in the cumulative sums.
        t = np.cumsum(weights)
        s = np.sum(weights)
        sample = int(np.searchsorted(t, np.random.rand(1)*s))
        return words[sample]

    _, last_state, probs, cell, initial_state = neural_network()

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        saver = tf.train.Saver(tf.all_variables())
        saver.restore(sess, './save/poetry.module-49')

        state_ = sess.run(cell.zero_state(1, tf.float32))

        x = np.array([list(map(word_num_map.get, '['))])
        [probs_, state_] = sess.run([probs, last_state], feed_dict={input_data: x, initial_state: state_})
        word = to_word(probs_)
        
        poem = ''
        word_biao = word
        while word != ']':
            poem += word_biao
            x = np.zeros((1, 1))
            x[0, 0] = word_num_map[word]
            [probs_, state_] = sess.run([probs, last_state], feed_dict={input_data: x, initial_state: state_})
            word = to_word(probs_)
            word_biao = word
            if word_biao == '。':
                word_biao = '。\n'
            print(word_biao)

        return poem

print(gen_poetry())

Output: (screenshot of a generated poem omitted)

Writing Acrostic Poems

An acrostic poem differs from free composition in that the first character of every line is given in advance, so the initial state must be re-established for each given character; we use a for loop to take the head characters one at a time and generate from each.
We first set the input to "[" and compute its state state_, then feed the head character together with that state to predict the next character. That is: given the current input "word" and the current state state_ produced by "[", compute the output and the next state.
The output becomes the next input, the next state becomes the current state, and we predict the following character.
When a line reaches the required length and ends, we exit the loop and move on to the next head character.

def gen_poetry_with_head_and_type(head, type):
    if type != 5 and type != 7:
        print('The second parameter has to be 5 or 7!')
        return

    def to_word(weights):
        # Sample a character from the predicted distribution.
        t = np.cumsum(weights)
        s = np.sum(weights)
        sample = int(np.searchsorted(t, np.random.rand(1)*s))
        return words[sample]

    _, last_state, probs, cell, initial_state = neural_network()

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        saver = tf.train.Saver()
        saver.restore(sess, './save/poetry.module-35')
        poem = ''
        i = 0

        for the_word in head:
            flag = True
            while flag:
                # Reset the state and prime the network with the start marker '['.
                state_ = sess.run(cell.zero_state(1, tf.float32))
                x = np.array([list(map(word_num_map.get, '['))])
                [probs_, state_] = sess.run([probs, last_state], feed_dict={input_data: x, initial_state: state_})

                # Feed the given head character.
                sentence = the_word
                x = np.zeros((1, 1))
                x[0, 0] = word_num_map[sentence]
                [probs_, state_] = sess.run([probs, last_state], feed_dict={input_data: x, initial_state: state_})

                word = to_word(probs_)
                sentence += word

                while word != '。':
                    x = np.zeros((1, 1))
                    x[0, 0] = word_num_map[word]
                    [probs_, state_] = sess.run([probs, last_state], feed_dict={input_data: x, initial_state: state_})
                    word = to_word(probs_)
                    sentence += word

                    # A full line has 2 + 2 * type characters: the head character,
                    # type - 1 more, a comma, type more, and the closing '。'.
                    # Accept it at that length; otherwise regenerate the line.
                    if len(sentence) == 2 + 2 * type:
                        sentence += '\n'
                        poem += sentence
                        flag = False

        return poem

print(gen_poetry_with_head_and_type("碧影江白", 7))

After processing, the generated poem is printed (screenshot omitted).
