A Few Tricks for Taming LSTMs: Understanding and Correcting the Code

雷鋒網 published a translation of a technical blog post (title: Taming LSTMs: Variable-sized mini-batches and why PyTorch is good for your health, by William Falcon; translated by 趙朋飛, 馬力羣, 塗世文; edited by MY). The post explains how to handle variable-length sequence input to an LSTM in PyTorch. Its example is simple, the explanation detailed, and the language easy to follow, which makes it one of the best introductions to variable-length sequence processing. Unfortunately, for whatever reason, the code contains many errors, which greatly weakens it as a tutorial. The post also never prints the results of several key calls, which makes it harder for beginners to follow. This article therefore takes that translation as its main basis, adds my own understanding, and presents my corrected code together with the key intermediate results, in the hope of making variable-length sequences easier to understand and handle.

If you have used PyTorch for deep learning research and experiments, you have probably felt the joy and burst of energy, almost like walking in the sunshine, when life suddenly seems wonderful. But the moment you try to implement mini-batch RNNs over variable-sized sequences in PyTorch, everything instantly falls apart again.

Don't worry, there is still hope. After reading this article you will get that feeling back: you and PyTorch step back into the sunshine, and your recurrent network hits accuracy numbers you have only ever read about on arXiv. How exciting!

Here are the tricks we will cover:

1. How to implement an LSTM in PyTorch with mini-batches of variable-sized sequences.

2. What PyTorch's pack_padded_sequence and pad_packed_sequence do and how they work.

3. How to mask out the padding tokens during backpropagation through time.

Tips: pad the text so all sequences have the same length; call pack_padded_sequence; run the LSTM; call pad_packed_sequence; flatten all outputs and labels; mask out the padded outputs; compute the cross-entropy loss.

Why go to all this trouble?

Speed and performance, of course.

Feeding variable-length elements into an LSTM simultaneously used to be a daunting technical challenge, but frameworks like PyTorch have largely solved it (TensorFlow also has a good solution, but it looks very, very complicated).

Besides, the documentation does not explain things very clearly and the examples are outdated. Doing this correctly means using gradients from multiple samples rather than from a single one, which speeds up training and makes gradient descent more accurate.

Although RNNs are hard to parallelize because each step depends on the previous one, using mini-batches still gives a large speed-up.

Sequence labeling

Let's start with a simple sequence labeling problem: we will build an LSTM/GRU model that does part-of-speech tagging on Justin Bieber lyrics, for example "is it too late now to say sorry?" (with 'to' and '?' removed).

Formatting the data

In practice you would do a lot of formatting work, but we skip it here for brevity. For simplicity, let's make up this toy dataset with sequences of different lengths.

sent_1_x = ['is', 'it', 'too', 'late', 'now', 'say', 'sorry']
sent_1_y = ['VB', 'PRP', 'RB', 'RB', 'RB', 'VB', 'JJ']
sent_2_x = ['ooh', 'ooh']
sent_2_y = ['NNP', 'NNP']
sent_3_x = ['sorry', 'yeah']
sent_3_y = ['JJ', 'NNP']
X = [sent_1_x, sent_2_x, sent_3_x]
Y = [sent_1_y, sent_2_y, sent_3_y]

When we feed each sentence into the embedding layer, every word is mapped to an index, so we need to convert the sentences into lists of integers.

Here we map each sentence to the corresponding indices in the vocabulary (V).

# map sentences to vocab
vocab = {'<PAD>': 0, 'is': 1, 'it': 2, 'too': 3, 'late': 4, 'now': 5, 'say': 6, 'sorry': 7, 'ooh': 8, 'yeah': 9}
# fancy nested list comprehension
X =  [[vocab[word] for word in sentence] for sentence in X]
print(X)
[[1, 2, 3, 4, 5, 6, 7], [8, 8], [7, 9]]

We do the same for the class labels (POS tags in our case); these will not be embedded.

tags = {'<PAD>': 0, 'VB': 1, 'PRP': 2, 'RB': 3, 'JJ': 4, 'NNP': 5}
# fancy nested list comprehension
Y =  [[tags[tag] for tag in sentence] for sentence in Y]
print(Y)
[[1, 2, 3, 3, 3, 1, 4], [5, 5], [4, 5]]

Trick 1: pad the sequences so that all sequences in the mini-batch have the same length.

What can vary in length inside the model? Certainly not the sequences within a batch!

With PyTorch, we need to record the length of each sequence before padding. We will use these lengths to mask the loss function so that the padded elements are not counted.

import numpy as np

# get the length of each sentence
X_lengths = [len(sentence) for sentence in X]
# create an empty matrix with padding tokens
pad_token = vocab['<PAD>']
longest_sent = max(X_lengths)
batch_size = len(X)
padded_X = np.ones((batch_size, longest_sent)) * pad_token
# copy over the actual sequences
for i, x_len in enumerate(X_lengths):
    sequence = X[i]
    padded_X[i, 0:x_len] = sequence[:x_len]
print(padded_X)
[[1. 2. 3. 4. 5. 6. 7.]
 [8. 8. 0. 0. 0. 0. 0.]
 [7. 9. 0. 0. 0. 0. 0.]]

We process the labels in the same way.

Y_lengths = [len(sentence) for sentence in Y]
# create an empty matrix with padding tokens
pad_token = tags['<PAD>']
longest_sent = max(Y_lengths)
batch_size = len(Y)
padded_Y = np.ones((batch_size, longest_sent)) * pad_token
# copy over the actual sequences
for i, y_len in enumerate(Y_lengths):
    sequence = Y[i]
    padded_Y[i, 0:y_len] = sequence[:y_len]
print(padded_Y)
[[1. 2. 3. 3. 3. 1. 4.]
 [5. 5. 0. 0. 0. 0. 0.]
 [4. 5. 0. 0. 0. 0. 0.]]

Data processing summary:

We converted the tokens into index sequences and zero-padded each sequence so that every sequence in the batch has the same length.
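As an aside (not part of the original article), PyTorch also ships a helper, torch.nn.utils.rnn.pad_sequence, that produces the same zero-padded batch directly from the index lists. A minimal sketch, assuming X is still the list of index lists shown above:

# Alternative sketch (my addition): build the same padded batch with pad_sequence.
import torch
from torch.nn.utils.rnn import pad_sequence

# X is the list of index lists from above, e.g. [[1, 2, 3, 4, 5, 6, 7], [8, 8], [7, 9]]
seq_tensors = [torch.tensor(s, dtype=torch.long) for s in X]
padded_X_alt = pad_sequence(seq_tensors, batch_first=True, padding_value=vocab['<PAD>'])
print(padded_X_alt)
# tensor([[1, 2, 3, 4, 5, 6, 7],
#         [8, 8, 0, 0, 0, 0, 0],
#         [7, 9, 0, 0, 0, 0, 0]])

The only difference is the dtype: pad_sequence keeps the integer type, while the NumPy matrix above is float; both are cast to long before being fed to the embedding layer.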

Building the model

With PyTorch we can build a very simple LSTM network. The layers of the model are as follows:

1. Embedding layer

2. LSTM layer

3. Fully connected linear layer

4. Softmax layer

import torch
import torch.nn as nn
import torch.nn.functional as F

nb_tags = len(tags) - 1          # <PAD> is not a real tag, so it gets no output unit
nb_vocab_words = len(vocab)
batch_size, seq_len = padded_X.shape
embedding_dim = 3
nb_lstm_units = 10
nb_layers = 2

padding_idx = vocab['<PAD>']
word_embedding = nn.Embedding(
    num_embeddings=nb_vocab_words,
    embedding_dim=embedding_dim,
    padding_idx=padding_idx
)

# design LSTM
lstm = nn.LSTM(
    input_size=embedding_dim,
    hidden_size=nb_lstm_units,
    num_layers=nb_layers,
    batch_first=True
)
       
# output layer which projects back to tag space
hidden_to_tag = nn.Linear(nb_lstm_units, nb_tags)

hidden_a = torch.randn(nb_layers, batch_size, nb_lstm_units).float()
hidden_b = torch.randn(nb_layers, batch_size, nb_lstm_units).float()

Trick 2: use PyTorch's pack_padded_sequence and pad_packed_sequence APIs

To repeat: every sequence in our input batch has now been padded to the same length.

In the forward pass we will:

1. Embed the word sequences.

2. Use pack_padded_sequence so that the LSTM never sees the padded elements.

3. Run the packed batch through the LSTM.

4. Unpack the result of pack_padded_sequence with pad_packed_sequence.

5. Reshape the LSTM output so it can be fed into the linear layer.

6. Apply log_softmax over the tag scores.

7. Reshape back so the final output has dimensions (batch_size, seq_len, nb_tags).

# 1. embed the input
# Dim transformation: (batch_size, seq_len) -> (batch_size, seq_len, embedding_dim)
X = torch.tensor(padded_X).long()
X = word_embedding(X)
print('embedded', X)
embedded tensor([[[ 1.3190,  0.0872, -0.2742],
         [-0.1677,  1.1510,  0.4656],
         [-0.8435, -0.5562,  0.9256],
         [-0.8396, -0.0076, -1.5482],
         [-0.5656, -1.3909, -0.7842],
         [-0.5416,  1.7457,  0.4726],
         [ 1.1060,  0.8440, -0.5556]],

        [[ 0.6334, -1.5088,  1.0840],
         [ 0.6334, -1.5088,  1.0840],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000]],

        [[ 1.1060,  0.8440, -0.5556],
         [ 1.2046, -1.6742,  0.6964],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000]]], grad_fn=<EmbeddingBackward>)

pack_padded_sequence compresses the padded tensor down to the actual data according to the lengths passed in, and the result is a PackedSequence. It takes three main arguments: input, lengths, and batch_first. input is our padded data, lengths holds the true length of each sequence, and batch_first simply says that the batch dimension comes first.

Tips: but why do we need pack_padded_sequence at all? Can't we just feed the padded data straight into the RNN? Of course we can, but look at how the data is actually consumed. Here is an example batch:

tensor([[1, 2, 3, 4, 5, 6, 7],
        [2, 3, 4, 5, 6, 7, 0]])

The RNN actually reads this batch one time step at a time: [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 0]. Notice the last step is [7, 0]: pushing that 0 through the RNN produces nothing useful and simply wastes compute, so we compress it away with pack_padded_sequence.
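To make this concrete, here is a small illustration (my addition, not from the original article) that packs the two-sequence batch above and prints how many real tokens remain at each time step; it assumes true lengths of [7, 6] for the two rows:

# Illustration only: pack the small example batch and inspect batch_sizes.
import torch
from torch.nn.utils.rnn import pack_padded_sequence

batch = torch.tensor([[1, 2, 3, 4, 5, 6, 7],
                      [2, 3, 4, 5, 6, 7, 0]])   # second row ends with one padding 0
lengths = [7, 6]                                # true lengths, longest first
packed = pack_padded_sequence(batch, lengths, batch_first=True)
print(packed.data)         # the 13 real tokens, laid out time step by time step
print(packed.batch_sizes)  # tensor([2, 2, 2, 2, 2, 2, 1]) -- the padded step is gone

batch_sizes tells the LSTM how many sequences are still active at each time step, which is exactly how the padded position gets skipped.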

# Dim transformation: (batch_size, seq_len, embedding_dim) -> (batch_size, seq_len, nb_lstm_units)
# pack_padded_sequence so that padded items in the sequence won't be shown to the LSTM
X = torch.nn.utils.rnn.pack_padded_sequence(X, X_lengths, batch_first=True)
print('pack_padded',X)
pack_padded: PackedSequence(data=tensor([[ 1.3190,  0.0872, -0.2742],
        [ 0.6334, -1.5088,  1.0840],
        [ 1.1060,  0.8440, -0.5556],
        [-0.1677,  1.1510,  0.4656],
        [ 0.6334, -1.5088,  1.0840],
        [ 1.2046, -1.6742,  0.6964],
        [-0.8435, -0.5562,  0.9256],
        [-0.8396, -0.0076, -1.5482],
        [-0.5656, -1.3909, -0.7842],
        [-0.5416,  1.7457,  0.4726],
        [ 1.1060,  0.8440, -0.5556]], grad_fn=<PackPaddedSequenceBackward>), batch_sizes=tensor([3, 3, 1, 1, 1, 1, 1]), sorted_indices=None, unsorted_indices=None)

Now, run the LSTM.

X, hidden = lstm(X, (hidden_a, hidden_b))
print('lstm output shape in packed seq: ', X[0].size())
print(X)
lstm output shape in packed seq:  torch.Size([11, 10])
PackedSequence(data=tensor([[ 1.5307e-01,  6.7684e-02,  6.4468e-02, -2.2887e-01,  2.0291e-01,
          1.6192e-02,  9.1459e-03,  1.6604e-01,  2.4689e-01,  2.1277e-01],
        [-2.0542e-01, -6.3485e-02,  2.1305e-02, -1.8940e-01,  3.6822e-01,
         -4.2697e-04, -1.2188e-02,  1.4914e-01, -1.9662e-01,  4.1007e-03],
        [-1.8556e-01,  5.8267e-01, -5.5726e-02,  3.2447e-01, -5.6095e-02,
          1.0067e-01,  1.5416e-02, -6.1702e-01, -3.9697e-02,  3.5665e-03],
        [ 1.1510e-01, -3.5210e-02,  1.6324e-01, -1.1573e-01,  1.2481e-01,
         -1.3048e-01,  1.2843e-02,  2.3278e-03,  5.5453e-02,  1.4491e-01],
        [-1.8321e-01, -1.2067e-01,  5.8485e-02, -1.5943e-01,  2.3355e-01,
         -1.0162e-01, -2.4926e-02, -7.1134e-02, -1.7803e-01,  3.8604e-02],
        [-7.5403e-02,  1.5720e-01,  5.6410e-02,  4.4938e-02, -3.5079e-02,
         -1.0616e-02,  6.9829e-02, -3.6775e-01, -4.3459e-02,  7.8495e-02],
        [ 6.3267e-03, -1.0364e-01,  1.5995e-01, -1.1714e-01,  8.7210e-02,
         -1.7794e-01, -6.6597e-03, -9.0191e-02,  3.0669e-03,  1.1943e-01],
        [-4.8220e-02, -1.3315e-01,  1.5506e-01, -1.2272e-01,  7.2997e-02,
         -1.8515e-01, -3.2535e-02, -1.3140e-01, -2.9254e-02,  1.0733e-01],
        [-7.8605e-02, -1.4908e-01,  1.4581e-01, -1.3867e-01,  6.2283e-02,
         -1.9139e-01, -5.0125e-02, -1.5407e-01, -4.2093e-02,  9.7522e-02],
        [-9.3640e-02, -1.4058e-01,  1.4463e-01, -1.4144e-01,  7.2176e-02,
         -1.7028e-01, -4.0201e-02, -1.7073e-01, -6.1939e-02,  1.0614e-01],
        [-1.0190e-01, -1.3188e-01,  1.3591e-01, -1.2254e-01,  8.6571e-02,
         -1.6958e-01, -4.1146e-02, -1.7248e-01, -8.0702e-02,  9.7578e-02]],
       grad_fn=<CatBackward>), batch_sizes=tensor([3, 3, 1, 1, 1, 1, 1]), sorted_indices=None, unsorted_indices=None)

From the output above you can see that, because the LSTM never computed the padded positions, its output has shape (11, 10), whereas a regular LSTM output would be (3, 7, 10). We therefore need to put the padding back and restore the (3, 7, 10) shape, which is exactly what pad_packed_sequence does.

# undo the packing operation
X, _ = torch.nn.utils.rnn.pad_packed_sequence(X, batch_first=True)
print('un packed:', X.size())
print(X)
un packed: torch.Size([3, 7, 10])
tensor([[[ 1.5307e-01,  6.7684e-02,  6.4468e-02, -2.2887e-01,  2.0291e-01,
           1.6192e-02,  9.1459e-03,  1.6604e-01,  2.4689e-01,  2.1277e-01],
         [ 1.1510e-01, -3.5210e-02,  1.6324e-01, -1.1573e-01,  1.2481e-01,
          -1.3048e-01,  1.2843e-02,  2.3278e-03,  5.5453e-02,  1.4491e-01],
         [ 6.3267e-03, -1.0364e-01,  1.5995e-01, -1.1714e-01,  8.7210e-02,
          -1.7794e-01, -6.6597e-03, -9.0191e-02,  3.0669e-03,  1.1943e-01],
         [-4.8220e-02, -1.3315e-01,  1.5506e-01, -1.2272e-01,  7.2997e-02,
          -1.8515e-01, -3.2535e-02, -1.3140e-01, -2.9254e-02,  1.0733e-01],
         [-7.8605e-02, -1.4908e-01,  1.4581e-01, -1.3867e-01,  6.2283e-02,
          -1.9139e-01, -5.0125e-02, -1.5407e-01, -4.2093e-02,  9.7522e-02],
         [-9.3640e-02, -1.4058e-01,  1.4463e-01, -1.4144e-01,  7.2176e-02,
          -1.7028e-01, -4.0201e-02, -1.7073e-01, -6.1939e-02,  1.0614e-01],
         [-1.0190e-01, -1.3188e-01,  1.3591e-01, -1.2254e-01,  8.6571e-02,
          -1.6958e-01, -4.1146e-02, -1.7248e-01, -8.0702e-02,  9.7578e-02]],

        [[-2.0542e-01, -6.3485e-02,  2.1305e-02, -1.8940e-01,  3.6822e-01,
          -4.2697e-04, -1.2188e-02,  1.4914e-01, -1.9662e-01,  4.1007e-03],
         [-1.8321e-01, -1.2067e-01,  5.8485e-02, -1.5943e-01,  2.3355e-01,
          -1.0162e-01, -2.4926e-02, -7.1134e-02, -1.7803e-01,  3.8604e-02],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00]],

        [[-1.8556e-01,  5.8267e-01, -5.5726e-02,  3.2447e-01, -5.6095e-02,
           1.0067e-01,  1.5416e-02, -6.1702e-01, -3.9697e-02,  3.5665e-03],
         [-7.5403e-02,  1.5720e-01,  5.6410e-02,  4.4938e-02, -3.5079e-02,
          -1.0616e-02,  6.9829e-02, -3.6775e-01, -4.3459e-02,  7.8495e-02],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,
           0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00]]],
       grad_fn=<TransposeBackward0>)

Run the fully connected linear layer.

# 3. Project to tag space
# Dim transformation: (batch_size, seq_len, nb_lstm_units) -> (batch_size * seq_len, nb_lstm_units)

# this one is a bit tricky as well. First we need to reshape the data so it goes into the linear layer
X = X.contiguous()
X = X.view(-1, X.shape[2])

# run through actual linear layer
X = hidden_to_tag(X)
print(X)
tensor([[-0.2492, -0.0340,  0.2604, -0.1128,  0.1133],
        [-0.2084,  0.0336,  0.1851, -0.1483,  0.1951],
        [-0.1816,  0.0132,  0.1213, -0.1773,  0.2254],
        [-0.1661,  0.0026,  0.0929, -0.1934,  0.2455],
        [-0.1564, -0.0099,  0.0747, -0.2039,  0.2592],
        [-0.1581, -0.0089,  0.0738, -0.2018,  0.2586],
        [-0.1506, -0.0080,  0.0705, -0.1919,  0.2626],
        [-0.0720,  0.0107,  0.0915, -0.0448,  0.2584],
        [-0.0969, -0.0102,  0.0507, -0.1165,  0.2872],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.2086, -0.2195,  0.1842, -0.1434,  0.1055],
        [-0.2085, -0.0694,  0.1326, -0.1655,  0.1689],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813],
        [-0.1556,  0.0033,  0.1379, -0.1076,  0.1813]],
       grad_fn=<AddmmBackward>)

Finally, apply log_softmax so we can classify.

Tips: log_softmax(x) is equivalent to log(softmax(x)), and its matching loss function is nn.NLLLoss. nn.NLLLoss takes a vector of log-probabilities and a target label: it picks out the output entry corresponding to the label, negates it, and averages over the samples. (Reference: Pytorch損失函數torch.nn.NLLLoss()詳解, https://blog.csdn.net/Jeremy_lf/article/details/102725285)
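A tiny, self-contained illustration of that equivalence (my addition; the numbers are random and only for demonstration): applying log_softmax and then nll_loss gives the same value as cross_entropy on the raw scores.

# Illustration only: log_softmax + nll_loss is the same as cross_entropy on raw scores.
import torch
import torch.nn.functional as F

scores = torch.randn(4, 5)            # 4 samples, 5 classes, unnormalized scores
target = torch.tensor([0, 2, 4, 1])   # one class index per sample

loss_a = F.nll_loss(F.log_softmax(scores, dim=1), target)
loss_b = F.cross_entropy(scores, target)
print(loss_a, loss_b)                 # the two values are identical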

# 4. Create softmax activations bc we're doing classification
# Dim transformation: (batch_size * seq_len, nb_lstm_units) -> (batch_size, seq_len, nb_tags)
X = F.log_softmax(X, dim=1) 
# I like to reshape for mental sanity so we're back to (batch_size, seq_len, nb_tags)
X = X.view(batch_size, seq_len, nb_tags)
Y_hat = X
print(Y_hat)
tensor([[[-1.8699, -1.6547, -1.3603, -1.7335, -1.5074],
         [-1.8429, -1.6009, -1.4494, -1.7828, -1.4395],
         [-1.8042, -1.6095, -1.5014, -1.7999, -1.3972],
         [-1.7854, -1.6166, -1.5264, -1.8126, -1.3738],
         [-1.7727, -1.6262, -1.5416, -1.8202, -1.3571],
         [-1.7743, -1.6251, -1.5424, -1.8180, -1.3576],
         [-1.7702, -1.6276, -1.5490, -1.8115, -1.3570]],

        [[-1.7375, -1.6548, -1.5739, -1.7103, -1.4071],
         [-1.7402, -1.6535, -1.5926, -1.7598, -1.3561],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487]],

        [[-1.7761, -1.7870, -1.3833, -1.7109, -1.4620],
         [-1.8014, -1.6624, -1.4603, -1.7584, -1.4240],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487],
         [-1.7856, -1.6267, -1.4921, -1.7376, -1.4487]]],
       grad_fn=<ViewBackward>)

Trick 3: mask out the network outputs we do not want to include in the loss

Finally, we are ready to compute the loss. The key point here is that we do not want the padded elements to influence the result.

Tips: the easiest way is to flatten all the network outputs and labels, then compute the loss over that one long sequence.

Define the loss function as follows.

def loss(Y_hat, Y, X_lengths):
    # before we calculate the negative log likelihood, we need to mask out the activations
    # this means we don't want to take into account padded items in the output vector
    # simplest way to think about this is to flatten ALL sequences into a REALLY long sequence
    # and calculate the loss on that.

    # flatten all the labels
    Y = Y.view(-1)

    # flatten all predictions
    Y_hat = Y_hat.view(-1, nb_tags)

    # create a mask by filtering out all tokens that ARE NOT the padding token
    tag_pad_token = tags['<PAD>']
    mask = (Y > tag_pad_token).float()
    print('mask:', mask)

    # count how many tokens we have
    nb_tokens = int(torch.sum(mask).item()) #torch.sum(mask).data[0]
    print('tokens number:', nb_tokens)

    # pick the values for the label and zero out the rest with the mask
    _, ix = torch.topk(Y_hat, 1, dim=1)
    print('Y_hat max', ix.view(-1))
    # this line is missing in the original article; tags are shifted down by 1 because
    # <PAD> has no column in Y_hat, so padded positions become -1
    Y = Y - 1
    print('Y', Y)

    # calculate accuracy (padded positions are -1 after the shift, so they never match)
    count = 0
    for i in range(len(Y)):
        if (Y.numpy())[i] == (ix.view(-1).numpy())[i]:
            count += 1
    print('accuracy:', str(count / nb_tokens * 100) + '%')

    Y_hat = Y_hat[range(Y_hat.shape[0]), Y] * mask

    # compute cross entropy loss which ignores all <PAD> tokens
    ce_loss = -torch.sum(Y_hat) / nb_tokens

    return ce_loss

Finally, compute the loss.

Y = torch.tensor(padded_Y).long()
loss = loss(Y_hat, Y, X_lengths)
print('loss', loss)
mask: tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.])
tokens number: 11
Y_hat max tensor([2, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 4, 4, 4, 4, 4, 4])
Y tensor([ 0,  1,  2,  2,  2,  0,  3,  4,  4, -1, -1, -1, -1, -1,  3,  4, -1, -1, -1, -1, -1])
accuracy: 27.27272727272727%
loss tensor(1.5931, grad_fn=<DivBackward0>)
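As a sanity check (my addition, not in the original article), the hand-rolled masked loss should agree with PyTorch's built-in nll_loss when padded positions are skipped via ignore_index. The targets have to be shifted down by one because <PAD> has no column in Y_hat; the snippet below reuses the Y_hat, Y, and nb_tags defined above.

# Sanity check: the masked loss above should equal the built-in nll_loss with ignore_index.
builtin_loss = F.nll_loss(
    Y_hat.view(-1, nb_tags),   # (batch_size * seq_len, nb_tags) log-probabilities
    (Y - 1).view(-1),          # shift targets by 1; <PAD> positions become -1
    ignore_index=-1            # padded positions contribute nothing to the loss
)
print('built-in nll_loss', builtin_loss)   # expected to match the masked loss above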

To summarize:

This is the best practice for handling variable-length batch input to an LSTM in PyTorch; a consolidated sketch follows the list.

1. Sort the sequences from longest to shortest.

2. Pad the sequences so every input has the same length.

3. Use pack_padded_sequence so the LSTM does not waste work on the padded positions (the PyTorch team at Facebook should really consider renaming this tongue-twister of an API!).

4. Use pad_packed_sequence to undo step 3.

5. Flatten the outputs and labels into one long vector.

6. Mask out the outputs you don't want.

7. Compute the cross-entropy loss.
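Putting the recipe together, here is a compact sketch (my own consolidation of the steps above, not William Falcon's original code) of the forward pass plus masked loss for one batch; it assumes the padded tensors, lengths, and layers built earlier in the article.

# Consolidated sketch of steps 1-7 (my summary, assuming the objects built above).
import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

def forward_and_loss(padded_X, padded_Y, X_lengths,
                     word_embedding, lstm, hidden_to_tag, hidden, nb_tags, tag_pad_token=0):
    # padded_X / padded_Y: (batch_size, seq_len) long tensors, already padded and
    # sorted from longest to shortest; X_lengths: the true length of each sequence.
    batch_size, seq_len = padded_X.shape

    # 1-2. embed, then pack so the LSTM never sees the padded positions
    embedded = word_embedding(padded_X)
    packed = pack_padded_sequence(embedded, X_lengths, batch_first=True)

    # 3-4. run the LSTM and restore the (batch_size, seq_len, nb_lstm_units) shape
    packed_out, hidden = lstm(packed, hidden)
    out, _ = pad_packed_sequence(packed_out, batch_first=True)

    # 5. flatten, project to tag space, and take log-probabilities
    out = out.contiguous().view(-1, out.shape[2])
    log_probs = F.log_softmax(hidden_to_tag(out), dim=1)   # (batch_size * seq_len, nb_tags)

    # 6-7. mask out the padded positions and average the negative log likelihood
    Y = padded_Y.view(-1)
    mask = (Y > tag_pad_token).float()
    nb_tokens = int(torch.sum(mask).item())
    picked = log_probs[range(log_probs.shape[0]), Y - 1] * mask   # tags are shifted by 1
    return -torch.sum(picked) / nb_tokens

# e.g. forward_and_loss(torch.tensor(padded_X).long(), torch.tensor(padded_Y).long(),
#                       X_lengths, word_embedding, lstm, hidden_to_tag,
#                       (hidden_a, hidden_b), nb_tags)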

The code above is the Jupyter notebook version. If you want to train for multiple iterations, use the .py version (lstm_pad_pack.py): https://download.csdn.net/download/longmaohu/12561087

 
