A hands-on guide to simple CNN text classification with PyTorch

CNNs are best known for their success in image processing, but they also have plenty of uses in NLP. I recently found that CNNs extract features from long texts quite well; for a simple task like text classification you do not need a complex, hard-to-parallelize RNN, a CNN is enough. (In fact you often do not need a neural network at all: a simple machine-learning model with traditional features also does well, and runs faster.) Good results on long texts and a very simple model are the reasons I wanted to write this post.

Since this is a hands-on tutorial, Loki (洛基) wants everyone who finishes this post to be able to build a CNN text classifier of their own! The code below comes with very detailed comments; this is baby-step-level teaching.

First, the deep learning environment: PyTorch 0.4 + Python 3.6, plus the usual machine-learning packages such as numpy and sklearn (the script also imports tensorboardX and tqdm), all installable with pip. For word segmentation I use pkuseg; if you do not have it, jieba works as a replacement, although pkuseg reportedly segments better. If you want to try it, install it with pip install pkuseg.

The dataset is CNews, a long-text classification corpus of Chinese news with 50,000 training samples, 5,000 validation samples, and 10,000 test samples, spread over 10 categories such as sports, fashion, and games. Because the texts are long, the preprocessing length threshold is set to 256 tokens.

You can prepare the training data yourself. Split it into three files, train.txt, dev.txt, and test.txt, where every line has the format [raw text + '\t' + class index]. Also prepare a class.txt file with one class name per line; the name on line 1 corresponds to class index 0, line 2 to index 1, and so on. class.txt is needed later to make the test results readable.
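As a hypothetical illustration (the texts below are made up, not taken from the real corpus), a couple of lines of train.txt could look like this, where \t stands for an actual tab character:

皇馬主場擊敗對手豪取五連勝……\t0
春季新款風衣的五種搭配思路……\t1

class.txt would then list one class name per line, e.g. 體育 on line 1 (index 0), 時尚 on line 2 (index 1), 遊戲 on line 3 (index 2), and so on for the 10 categories.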

The model input uses pre-trained word vectors. Since the dataset is Chinese, I use the Chinese Wikipedia pre-trained vectors sgns.wiki.word, which can be downloaded online. Before training, place the four data files train.txt, dev.txt, test.txt, class.txt plus the word-vector file sgns.wiki.word into the directory layout below (note that the .py file sits in the same directory as the CNews folder):
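A sketch of the layout, reconstructed from the paths used in the script (vocab.pkl, myEmbedding.npz, the log folder, and the checkpoint do not need to exist beforehand; the script creates them on the first run):

CNN_Classifier.py
CNews/
    data/
        train.txt
        dev.txt
        test.txt
        class.txt
        sgns.wiki.word
        vocab.pkl          (generated)
        myEmbedding.npz    (generated)
    log/                   (generated, TensorBoard logs)
    TextCNN.ckpt           (generated, best checkpoint)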

Next is the code of CNN_Classifier.py, with detailed comments throughout:

# coding: UTF-8
import os
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import pickle as pkl

from sklearn import metrics
from importlib import import_module
from tensorboardX import SummaryWriter
from tqdm import tqdm 
from datetime import timedelta

import pkuseg # word segmentation toolkit; if unavailable, jieba or another segmenter works as well


#################################################################################
############################ Parameters and paths ##############################

class Config(object):
    def __init__(self, dataset_dir, embedding):
        self.model_name = 'TextCNN'
        self.train_path = dataset_dir + '/data/train.txt'                                # training set
        self.dev_path = dataset_dir + '/data/dev.txt'                                    # validation set
        self.test_path = dataset_dir + '/data/test.txt'                                  # test set
        self.class_list = [x.strip() for x in open(
            dataset_dir + '/data/class.txt').readlines()]                                # class names
        self.vocab_path = dataset_dir + '/data/vocab.pkl'                                # vocabulary
        self.save_path = dataset_dir + '/' + self.model_name + '.ckpt'   # trained-model checkpoint
        self.log_path = dataset_dir + '/log/' + self.model_name
        self.embedding_pretrained = torch.tensor(
            np.load(dataset_dir + '/data/' + embedding)["embeddings"].astype('float32'))\
            if embedding != 'random' else None                                       # pre-trained word embeddings
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')   # device

        self.dropout = 0.4                                              
        self.require_improvement = 1000                                 # stop early if the dev loss has not improved for 1000 batches
        self.num_classes = len(self.class_list)
        self.n_vocab = 0                                                # vocabulary size, set at runtime
        self.num_epochs = 20
        self.batch_size = 32
        self.pad_size = 256                                             # every sequence is truncated or padded to this length
        self.learning_rate = 1e-5                                       
        self.embed_dim = self.embedding_pretrained.size(1) \
            if self.embedding_pretrained is not None else 300           # embedding dimension (falls back to 300 for random embeddings)
        self.filter_sizes = (2, 3, 4)                                   # convolution kernel heights
        self.num_filters = 256                                          # number of kernels per size (output channels)


#################################################################################
################################# Model definition ##############################

class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed_dim, padding_idx=config.n_vocab - 1)
        # The three convolution layers are (in_channels=1, out_channels=256, kernel_size=(2, 300)),
        #                                  (1, 256, (3, 300)) and (1, 256, (4, 300)).
        # They run in parallel, extracting 2-gram, 3-gram and 4-gram features at the same time.
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, config.num_filters, (k, config.embed_dim)) for k in config.filter_sizes])
        self.dropout = nn.Dropout(config.dropout)
        self.fc = nn.Linear(config.num_filters * len(config.filter_sizes), config.num_classes)

    # Assume embed_dim=300; each convolution layer has 256 kernels (it maps one input sequence onto 256 channels).
    # The three layers are (1, 256, (2, 300)), (1, 256, (3, 300)) and (1, 256, (4, 300)).
    # x(b_size, 1, seq_len, 300) goes through a convolution and becomes (b, 256, seq_len-1, 1), (b, 256, seq_len-2, 1) or (b, 256, seq_len-3, 1).
    # After the convolution comes a relu, then the trailing dimension of size 1 is squeezed away, giving x(b, 256, seq_len-1), which enters the pooling layer.
    # Each pooling layer outputs a (b, 256); the three results are concatenated in forward().
    def conv_and_pool(self, x, conv):
        x = F.relu(conv(x)).squeeze(3)
        # max_pool1d is 1-D pooling: besides b_size and channel the input has only one more dimension, i.e. x(b_size, channel, d1), so kernel_size is a single width.
        # max_pool2d is 2-D pooling: x(b_size, channel, d1, d2), so its kernel_size is two-dimensional.
        # max_pool1d((b, 256, seq_len-1), kernel_size=seq_len-1) -> (b, 256, 1)
        # squeeze(2) then gives (b, 256)
        x = F.max_pool1d(x, x.size(2)).squeeze(2)
        return x

    """
    nn中的成員比如nn.Conv2d,都是類,可以提取待學習的參數。當我們在定義網絡層的時候,層內如果有需要學習的參數,那麼我們就要用nn組件;
    nn.functional裏的成員都是函數,只是完成一些功能,比如池化,整流線性函數,不保存參數,所以如果某一層只是單純完成一些簡單的功能,沒有
    待學習的參數,那麼就用nn.funcional裏的組件
    """

    # During data preprocessing (later in this file) each batch x is built as a tuple (data, length),
    # where data is (b_size, seq_len) and length is (batch_size,).
    # x[0]: (b_size, seq_len)
    def forward(self, x):
        out = self.embedding(x[0]) # out: (b_size, seq_len, embed_dim);  x[1] is a 1-D tensor with the length of each of the batch_size samples
        out = out.unsqueeze(1) # (b_size, 1, seq_len, embed_dim)
        out = torch.cat([self.conv_and_pool(out, conv) for conv in self.convs], 1) # (b, channel * 3) == (b, 256 * 3)
        out = self.dropout(out)
        out = self.fc(out) # (b, num_classes)
        return out


# Xavier normal init (xavier_normal_): normal distribution with mean 0 and std sqrt(2 / (fan_in + fan_out)), default gain=1
# Kaiming normal init (kaiming_normal_): normal distribution with mean 0 and std sqrt(2 / ((1 + a^2) * fan_in)), default a=0
# The pre-trained embedding must be skipped during initialization
def init_network(model, method='xavier', exclude='embedding', seed=123):
    for name, w in model.named_parameters():
        if exclude not in name: # keep the pre-trained weights for the embedding layer
            if 'weight' in name:
                if method == 'xavier':
                    nn.init.xavier_normal_(w)
                elif method == 'kaiming':
                    nn.init.kaiming_normal_(w)
                else:
                    nn.init.normal_(w)
            elif 'bias' in name:
                nn.init.constant_(w, 0)
            else:
                pass


#################################################################################
############################### Training and testing ############################

def train(config, model, train_iter, dev_iter, test_iter):
    start_time = time.time()
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)

    # Exponential learning-rate decay: after every epoch, lr = gamma * lr
    # (applied together with scheduler.step() inside the epoch loop)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    total_batch = 0  # number of batches processed so far
    dev_best_loss = float('inf')
    last_improve = 0  # batch count at the last improvement of the dev loss
    flag = False  # whether there has been no improvement for a long time
    # from tensorboardX import SummaryWriter  -- writes the training logs
    writer = SummaryWriter(log_dir=config.log_path + '/' + time.strftime('%m-%d_%H.%M', time.localtime()))
    for epoch in range(config.num_epochs):
        print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
        scheduler.step() # learning-rate decay
        for i, (trains, labels) in enumerate(train_iter): # each item of train_iter is ((x[b, len], len[b]), labels[b])
            outputs = model(trains) # trains[0]: (b_size, seq_len) 2-D tensor of token ids,  trains[1]: (b_size,) 1-D tensor of lengths
            # 1. zero the gradients -> 2. compute the loss -> 3. backpropagate -> 4. update the parameters
            model.zero_grad()
            loss = F.cross_entropy(outputs, labels) # outputs(b, num_classes), labels(b)
            loss.backward()
            optimizer.step()
            if total_batch % 100 == 0:
                # every 100 batches, report the metrics on the training and validation sets
                true = labels.data.cpu()
                predic = torch.max(outputs.data, 1)[1].cpu()
                train_acc = metrics.accuracy_score(true, predic) # sklearn.metrics.accuracy_score returns the fraction of correct predictions
                dev_acc, dev_loss = evaluate(config, model, dev_iter)
                if dev_loss < dev_best_loss:
                    dev_best_loss = dev_loss
                    torch.save(model.state_dict(), config.save_path)
                    improve = '*'
                    last_improve = total_batch
                else:
                    improve = ''
                time_dif = get_time_dif(start_time)
                msg = 'Iter: {0:>6},  Train Loss: {1:>5.2},  Train Acc: {2:>6.2%},  Val Loss: {3:>5.2},  Val Acc: {4:>6.2%},  Time: {5} {6}'
                print(msg.format(total_batch, loss.item(), train_acc, dev_loss, dev_acc, time_dif, improve))
                writer.add_scalar("loss/train", loss.item(), total_batch)
                writer.add_scalar("loss/dev", dev_loss, total_batch)
                writer.add_scalar("acc/train", train_acc, total_batch)
                writer.add_scalar("acc/dev", dev_acc, total_batch)
                model.train() # evaluate() calls model.eval(), which disables dropout, so switch back to training mode
            total_batch += 1
            if total_batch - last_improve > config.require_improvement:
                # if the dev loss has not improved for more than 1000 batches, stop training
                print("No optimization for a long time, auto-stopping...")
                flag = True
                break
        if flag:
            break
    writer.close()
    test(config, model, test_iter)


def test(config, model, test_iter):
    # test
    model.load_state_dict(torch.load(config.save_path))
    model.eval()
    start_time = time.time()
    test_acc, test_loss, test_report, test_confusion = evaluate(config, model, test_iter, test=True)
    msg = 'Test Loss: {0:>5.2},  Test Acc: {1:>6.2%}'
    print(msg.format(test_loss, test_acc))
    print("Precision, Recall and F1-Score...")
    print(test_report)
    print("Confusion Matrix...")
    print(test_confusion)
    time_dif = get_time_dif(start_time)
    print("Time usage:", time_dif)


def evaluate(config, model, data_iter, test=False):
    model.eval() # disable dropout
    loss_total = 0
    predict_all = np.array([], dtype=int)
    labels_all = np.array([], dtype=int)
    with torch.no_grad(): # keep these computations out of the autograd graph
        for texts, labels in data_iter:
            outputs = model(texts) 
            loss = F.cross_entropy(outputs, labels)
            loss_total += loss
            labels = labels.data.cpu().numpy()
            predic = torch.max(outputs.data, 1)[1].cpu().numpy()
            # np.append concatenates an array with a value or with another array, returning a flat 1-D np.array
            labels_all = np.append(labels_all, labels) 
            predict_all = np.append(predict_all, predic)

    acc = metrics.accuracy_score(labels_all, predict_all)
    if test:
        # classification_report gives per-class precision, recall and f1-score
        report = metrics.classification_report(labels_all, predict_all, target_names=config.class_list, digits=4)
        # confusion matrix
        confusion = metrics.confusion_matrix(labels_all, predict_all)
        return acc, loss_total / len(data_iter), report, confusion
    return acc, loss_total / len(data_iter)


#######################################################################################
################################ Data preprocessing #################################

seg = pkuseg.pkuseg() # word segmenter; call seg.cut(x) to tokenize. jieba or another segmenter works too

MAX_VOC_SIZE = 500000  # vocabulary size limit
UNK, PAD = '<UNK>', '<PAD>'  # unknown token, padding token


def build_vocab(file_path, tokenizer, max_size, min_freq):
    vocab_dic = {}
    with open(file_path, 'r', encoding='UTF-8') as f:
        for line in tqdm(f):
            lin = line.strip()
            if not lin:
                continue
            content = lin.split('\t')[0]
            for word in tokenizer(content):
                vocab_dic[word] = vocab_dic.get(word, 0) + 1
        # vocab_list = sorted([_ for _ in vocab_dic.items() if _[1] >= min_freq], key=lambda x: x[1], reverse=True)[:max_size]
        vocab_list = sorted([_ for _ in vocab_dic.items() if _[1] >= min_freq], key=lambda x: x[1], reverse=True)
        vocab_dic = {word_count[0]: idx for idx, word_count in enumerate(vocab_list)}
        vocab_dic.update({UNK: len(vocab_dic), PAD: len(vocab_dic) + 1})
    return vocab_dic
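# Illustrative example (made-up tokens): for a corpus whose most frequent tokens are '的' and '比賽',
# build_vocab returns something like {'的': 0, '比賽': 1, ..., '<UNK>': V, '<PAD>': V + 1},
# i.e. tokens sorted by descending frequency and mapped to consecutive ids, with the two
# special tokens appended at the end (V is the number of ordinary tokens kept).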


def build_dataset(config, use_word):
    if use_word:
        # tokenizer = lambda x: x.split(' ')  # split on spaces, word-level
        tokenizer = lambda x: seg.cut(x) # word segmentation
    else:
        tokenizer = lambda x: [y for y in x]  # char-level
    if os.path.exists(config.vocab_path):
        vocab = pkl.load(open(config.vocab_path, 'rb'))
    else:
        vocab = build_vocab(config.train_path, tokenizer=tokenizer, max_size=MAX_VOC_SIZE, min_freq=1)
        pkl.dump(vocab, open(config.vocab_path, 'wb'))
    print(f"Vocab size: {len(vocab)}")

    def load_dataset(path, pad_size=32):
        contents = []
        with open(path, 'r', encoding='UTF-8') as f:
            for line in tqdm(f):
                lin = line.strip()
                if not lin:
                    continue
                content, label = lin.split('\t')
                words_line = []
                token = tokenizer(content)
                seq_len = len(token)
                if pad_size:
                    if len(token) < pad_size:
                        token.extend([PAD] * (pad_size - len(token)))  # pad with the PAD token; it is mapped to its id below
                    else:
                        token = token[:pad_size]
                        seq_len = pad_size
                # word to id
                for word in token:
                    words_line.append(vocab.get(word, vocab.get(UNK)))
                contents.append((words_line, int(label), seq_len))
        return contents  # [([...], 0, len), ([...], 1, len), ...]
    train = load_dataset(config.train_path, config.pad_size)
    dev = load_dataset(config.dev_path, config.pad_size)
    test = load_dataset(config.test_path, config.pad_size)
    return vocab, train, dev, test


class DatasetIterater(object):
    """__init__
        batches: [([...], 0, len), ([...], 1, len), ...]; each element is (token_id_list, label, len); this is the full, un-batched dataset
        batch_size: config.batch_size
        device: config.device
    """
    def __init__(self, batches, batch_size, device):
        self.batch_size = batch_size
        self.batches = batches
        self.n_batches = len(batches) // batch_size
        self.residue = False  # True if the last batch has fewer than batch_size samples
        if len(batches) % batch_size != 0:
            self.residue = True
        self.index = 0
        self.device = device

    """input
        datas: one batch of samples, in the same format as batches; each element is (token_id_list, label, len)
    output
        (x[b_size, max_len], len[b_size]), label[b_size]
        x is the padded tensor of token ids,
        len is the pre-padding length of each sample in the batch (lengths beyond pad_size are capped at pad_size),
        label is the true label of each sample
    """
    def _to_tensor(self, datas):
        # x: (b_size, seq_len) -> [[...], [...], ...], the token-id sequences of one batch
        x = torch.LongTensor([_[0] for _ in datas]).to(self.device)
        # y: (b_size) -> [0, 1, 5, ...], the labels of one batch
        y = torch.LongTensor([_[1] for _ in datas]).to(self.device)

        # pre-padding length of each sample (capped at pad_size)   seq_len: (b_size) -> [len1, len2, ...]
        seq_len = torch.LongTensor([_[2] for _ in datas]).to(self.device)
        return (x, seq_len), y # (x[b, l], l[b]), y[b]

    # When the object is iterated over (e.g. in a for loop), __next__ defines how its members are returned one by one
    def __next__(self):
        # e.g. with len(self.batches) == 25 and self.batch_size == 6, self.n_batches == 4 and one extra sample remains
        if self.residue and self.index == self.n_batches:
            batches = self.batches[self.index * self.batch_size: len(self.batches)] # the leftover samples that do not fill a whole batch
            self.index += 1
            batches = self._to_tensor(batches)
            return batches
        elif not self.residue and self.index == self.n_batches: # no leftover samples: the epoch ends right after n_batches batches
            self.index = 0
            raise StopIteration
        elif self.index > self.n_batches: # reset the batch index; the current epoch is over
            self.index = 0
            raise StopIteration # signals the end of iteration
        else:
            batches = self.batches[self.index * self.batch_size: (self.index + 1) * self.batch_size]
            self.index += 1
            batches = self._to_tensor(batches)
            return batches

    def __iter__(self): # makes DatasetIterater instances iterable
        # __iter__ must return an iterator; since the class defines __next__, returning the instance itself is enough
        return self

    def __len__(self):
        if self.residue:
            return self.n_batches + 1
        else:
            return self.n_batches


def build_iterator(dataset, config):
    iter = DatasetIterater(dataset, config.batch_size, config.device)
    return iter


def get_time_dif(start_time):
    # elapsed time since start_time
    end_time = time.time()
    time_dif = end_time - start_time
    # timedelta represents a time interval
    return timedelta(seconds=int(round(time_dif)))

# Adjust the directories and file names below as needed.
train_dir = "./CNews/data/train.txt"
vocab_dir = "./CNews/data/vocab.pkl"
pretrain_dir = "./CNews/data/sgns.wiki.word"
emb_dim = 300
filename_trimmed_dir = "./CNews/data/myEmbedding"
if os.path.exists(vocab_dir):
    word_to_id = pkl.load(open(vocab_dir, 'rb'))
else:
    # tokenizer = lambda x: [y for y in x]  # build the vocabulary at character level
    tokenizer = lambda x: seg.cut(x) # build the vocabulary at word level
    word_to_id = build_vocab(train_dir, tokenizer=tokenizer, max_size=MAX_VOC_SIZE, min_freq=1)
    pkl.dump(word_to_id, open(vocab_dir, 'wb'))

embeddings = np.random.rand(len(word_to_id), emb_dim)
f = open(pretrain_dir, "r", encoding='UTF-8')
for i, line in enumerate(f.readlines()):
    # if i == 0:  # skip the first line if it is a header
    #     continue
    lin = line.strip().split(" ")
    if lin[0] in word_to_id:
        idx = word_to_id[lin[0]]
        emb = [float(x) for x in lin[1:301]]
        embeddings[idx] = np.asarray(emb, dtype='float32')
f.close()
np.savez_compressed(filename_trimmed_dir, embeddings=embeddings)
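# Note: Config above reloads this file with np.load(...)["embeddings"], so the key name "embeddings"
# and the resulting file name myEmbedding.npz must stay consistent with the embedding_file used below.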

#######################################################################################
############################ Load the data and start training #########################

# name of the dataset folder
dataset_dir = 'CNews'
# name of the word-embedding file
embedding_file = 'myEmbedding.npz'

config = Config(dataset_dir, embedding_file)
# True = word level (pkuseg segmentation, matching the word vectors built above); False = char level, where each character is a token
use_word = True # True for word, False for char


np.random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed_all(1)
torch.backends.cudnn.deterministic = True  # make the results reproducible across runs

start_time = time.time()

print("Loading data...")
# train_data -> [([...], 0, seq_len), ([...], 1, seq_len), ...]
vocab, train_data, dev_data, test_data = build_dataset(config, use_word)
train_iter = build_iterator(train_data, config)
dev_iter = build_iterator(dev_data, config)
test_iter = build_iterator(test_data, config)
time_dif = get_time_dif(start_time)
print("Time usage:", time_dif)

# train
config.n_vocab = len(vocab) # vocabulary size

model_name = config.model_name
model = Model(config).to(config.device)
init_network(model) # initialize the non-embedding parameters
print(model) # print the network structure
train(config, model, train_iter, dev_iter, test_iter)
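After train() finishes (it runs the final test pass itself), the saved checkpoint can also be reused to classify a single sentence. The snippet below is a minimal sketch of my own and is not part of the original script; it assumes model, config, vocab, seg, PAD and UNK are still in scope from the code above, and the sample headline in the last line is made up:

def predict(text, model, config, vocab, pad_size=256):
    model.eval()
    tokens = seg.cut(text)                                    # same pkuseg tokenizer as in training
    seq_len = min(len(tokens), pad_size)
    if len(tokens) < pad_size:
        tokens = tokens + [PAD] * (pad_size - len(tokens))    # pad short texts
    else:
        tokens = tokens[:pad_size]                            # truncate long texts
    ids = [vocab.get(w, vocab.get(UNK)) for w in tokens]      # word -> id
    x = torch.LongTensor([ids]).to(config.device)             # (1, pad_size)
    lengths = torch.LongTensor([seq_len]).to(config.device)   # (1,)
    with torch.no_grad():
        out = model((x, lengths))                             # (1, num_classes)
    return config.class_list[out.argmax(dim=1).item()]

# e.g. print(predict('皇馬主場擊敗對手豪取五連勝', model, config, vocab))  # made-up headline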

 
