Natural Language Processing (NLP): 12 BERT Text Classification

Introduction to BERT

Reading the BERT Paper

From the paper: https://arxiv.org/pdf/1810.04805.pdf

BERT says: "I'll use the Transformer's encoders."

Ernie scoffs: "Heh, then you can't take in a passage the way a Bi-LSTM does."

BERT replies confidently: "We'll use masks."

A quick explanation of masking:

A language model predicts the next word from the preceding words, but with self-attention each token can also attend to itself, so it would trivially "predict" itself with 100% accuracy, which is meaningless. The mask therefore hides the words that need to be predicted.

As shown in the figure below:

[figure]
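
To make the idea concrete, here is a minimal sketch of BERT-style token masking (an illustration only, not the library's implementation; the 15% masking rate follows the paper, and the handling of the selected tokens is simplified):

import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15):
    """Randomly hide about 15% of the tokens so the model has to predict them."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)   # hide the token from the model
            labels.append(tok)          # keep the original token as the prediction target
        else:
            masked.append(tok)
            labels.append(None)         # not a prediction target
    return masked, labels

masked, labels = mask_tokens("the movie was surprisingly good".split())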

BERT's input representation is shown in the figure below:

[figure]

The BERT paper describes several NLP tasks that BERT can handle:

  1. Short-text (sentence-pair) similarity
  2. Text classification
  3. Question answering (QA bots)
  4. Semantic tagging

[figure]

Using BERT for Feature Extraction

Fine-tuning is not the only way to use BERT. Just like ELMo, you can use a pre-trained BERT to produce contextualized word embeddings and then feed those embeddings to your existing model (see the sketch after the list below).

[figure]

Which vectors work best as contextual embeddings? It depends on the task. The paper examines six choices (compared against the fine-tuned model, which scores 96.4):

[figure]

  • Feature extraction
  • Fine-tuning
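
A minimal sketch of the feature-extraction route with the transformers library (the bert-base-uncased checkpoint and the choice of the last hidden layer are assumptions for illustration; other layers or layer combinations can be used, as the comparison above suggests):

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer.encode_plus("a contextual embedding example", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs[0] is the last hidden layer, shape (batch, seq_len, hidden_size)
token_embeddings = outputs[0][0]                    # one contextual vector per wordpiece
sentence_embedding = token_embeddings.mean(dim=0)   # one simple pooling choice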

BERT Source Code Walkthrough

BERT uses the encoder stack of the Transformer (whose position-wise layers are essentially a CNN with a filter size of 1). It is best suited to inputs of up to a few hundred tokens; very long documents do not work well. Download the transformers repository from GitHub for the BERT pre-trained model handling and the text classification example.

https://github.com/huggingface/transformers

Focus on a few important files:

  • run_glue.py: an example implementation of text classification
  • optimization.py: provides the AdamW optimizer
  • modeling_bert.py: Hugging Face's PyTorch implementation of the BERT model

pytorch_model.bin: the pre-trained model weights

vocab.txt: the vocabulary file

config.json: the BERT configuration file, defining the model's main parameters

English pre-trained model:

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json

Chinese pre-trained model:

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-pytorch_model.bin

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json

  • tokenization_bert.py: loads the vocab.txt vocabulary file and provides the data preprocessing methods; by default it looks for a file named vocab.txt (see the code for details)
  • Loading the vocabulary and model from local files

First, place the relevant files in the bert_pretrain directory:

bert_pretrain$ tree -a
├── config.json
├── pytorch_model.bin
└── vocab.txt

import os
from transformers import BertTokenizer, BertForSequenceClassification

bert_path = './bert_pretrain/'
tokenizer = BertTokenizer.from_pretrained(os.path.join(bert_path, 'vocab.txt'))
model = BertForSequenceClassification.from_pretrained(
    os.path.join(bert_path, 'pytorch_model.bin'),
    config=os.path.join(bert_path, 'config.json'))
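
As a quick sanity check that the vocabulary loaded correctly, you can run the tokenizer on a sentence (the example sentence is arbitrary):

encoded = tokenizer.encode_plus("this movie is great")
print(encoded['input_ids'])       # wordpiece ids, with [CLS] and [SEP] added
print(encoded['token_type_ids'])  # segment ids (all 0 for a single sentence)
print(encoded['attention_mask'])  # 1 for every real token
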
  • Core code in the BERT repository

You can browse the PyTorch implementation of BERT at https://github.com/huggingface/transformers:

  • modeling: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py
    • BertEmbeddings: wordpiece embeddings + position embeddings + token type embeddings
    • BertSelfAttention: the query, key, and value projections
    • BertSelfOutput
    • BertIntermediate
    • BertOutput
    • BertForSequenceClassification
  • configuration: https://github.com/huggingface/transformers/blob/master/transformers/configuration_bert.py
  • tokenization: https://github.com/huggingface/transformers/blob/master/transformers/tokenization_bert.py
  • DataProcessor: https://github.com/huggingface/transformers/blob/master/transformers/data/processors/glue.py#L194

Upgraded Versions of BERT

RoBERTa: a more powerful BERT

Paper: https://arxiv.org/pdf/1907.11692.pdf

  • More training data (16GB -> 160GB), larger batch sizes, longer training
  • Drops the NSP loss
  • Trains on longer sequences
  • Static vs. dynamic masking (RoBERTa uses dynamic masking; see the sketch below)
  • Estimated training cost above USD 60,000
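
A minimal sketch of the static vs. dynamic masking distinction (a simplified illustration, not RoBERTa's actual preprocessing code):

import random

def sample_mask_positions(length, prob=0.15):
    # choose which token positions will be replaced by [MASK]
    return [i for i in range(length) if random.random() < prob]

seq_len = 128

# static masking: positions are sampled once during preprocessing and reused every epoch
static_positions = sample_mask_positions(seq_len)

# dynamic masking: positions are re-sampled each time the sequence is fed to the model,
# so the model sees a different masking pattern for the same sentence every epoch
for epoch in range(3):
    dynamic_positions = sample_mask_positions(seq_len)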

ALBERT: BERT with fewer parameters

Paper: https://arxiv.org/pdf/1909.11942.pdf

  • A lightweight BERT model
  • Core ideas (see the sketch below):
    • Share parameters across layers (fewer model parameters)
    • Increase the hidden dimension within a layer
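
A minimal sketch of cross-layer parameter sharing in PyTorch (an illustration of the idea only; ALBERT's real implementation also factorizes the embedding matrix and differs in many details):

import torch.nn as nn

class SharedLayerEncoder(nn.Module):
    def __init__(self, layer, num_layers):
        super().__init__()
        self.layer = layer            # a single encoder layer, reused at every depth
        self.num_layers = num_layers

    def forward(self, x):
        for _ in range(self.num_layers):
            x = self.layer(x)         # the same weights are applied num_layers times
        return x

# 12 layers of depth, but only one layer's worth of parameters
encoder = SharedLayerEncoder(nn.TransformerEncoderLayer(d_model=768, nhead=12), num_layers=12)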

DistilBERT: a lighter BERT

https://arxiv.org/pdf/1910.01108.pdf

  • Pre-training objectives: MLM and NSP
  • MLM cross-entropy loss: $-\sum_{i=1}^{k} p_i \log(q_i) = -\log(q_{\text{label}})$ when $p$ is a one-hot label
  • The teacher (trained with MLM) outputs a full probability distribution over the vocabulary
  • The student learns to match that distribution: $-\sum_{i=1}^{k} p^{\text{teacher}}_i \log(q^{\text{student}}_i)$ (see the sketch below)
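
A minimal sketch of the soft-label distillation loss in PyTorch (simplified; DistilBERT's actual training also uses a temperature, the hard-label MLM loss, and a cosine embedding loss):

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits):
    # the teacher produces a soft distribution p_teacher over the vocabulary
    p_teacher = F.softmax(teacher_logits, dim=-1)
    # the student's log-probabilities log(q_student)
    log_q_student = F.log_softmax(student_logits, dim=-1)
    # cross entropy between the two distributions: -sum_i p_teacher_i * log(q_student_i)
    return -(p_teacher * log_q_student).sum(dim=-1).mean()

teacher_logits = torch.randn(8, 30522)   # (batch, vocab_size)
student_logits = torch.randn(8, 30522)
loss = distillation_loss(student_logits, teacher_logits)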

Patient Distillation

https://arxiv.org/abs/1908.09355

[figure]

Movie Review Sentiment Analysis

We run sentiment analysis on the Stanford movie review dataset (SST-2) to verify how well BERT performs on this task.
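
The processor below assumes the GLUE SST-2 TSV layout: a header row, then one sentence and one label (0 = negative, 1 = positive) per line, separated by a tab. For example (these sentences are made up for illustration):

sentence	label
a touching and well acted film	1
the plot goes nowhere	0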

Code Implementation

import argparse
import logging
import os
import random
import time

import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

from transformers import BertTokenizer, BertForSequenceClassification, AdamW
from transformers.data.processors.glue import glue_convert_examples_to_features as convert_examples_to_features
from transformers.data.processors.utils import DataProcessor, InputExample

logger = logging.getLogger(__name__)

# initialize hyperparameters and runtime arguments
parser = argparse.ArgumentParser()
args = parser.parse_args(args=[])  # parse an empty list so the script also works inside a Jupyter notebook
args.data_dir = "./data/"
args.model_type = "bert"
args.task_name = "sst-2"
args.output_dir = "./outputs2/"
args.max_seq_length = 128
args.do_train = True
args.do_eval = True
args.warmup_steps = 0
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
args.device = device
args.seed = 1234
args.batch_size = 48
args.n_epochs=3
args.lr = 5e-5

print('args: ', args)


def set_seed(args):
    random.seed(args.seed)
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(args.seed)


set_seed(args)  # added here for reproducibility


class Sst2Processor(DataProcessor):
    """Processor for the SST-2 data set (GLUE version)."""

    def get_example_from_tensor_dict(self, tensor_dict):
        """See base class."""
        return InputExample(
            tensor_dict["idx"].numpy(),
            tensor_dict["sentence"].numpy().decode("utf-8"),
            None,
            str(tensor_dict["label"].numpy()),
        )

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training and dev/test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = "%s-%s" % (set_type, i)
            text_a = line[0]
            label = line[1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
        return examples


def load_and_cache_examples(args, processor, tokenizer, set_type):
    # Load data features from cache or dataset file
    print("Creating features from dataset file at {}".format(args.data_dir))
    label_list = processor.get_labels()

    if set_type == 'train':
        examples = (
            processor.get_train_examples(args.data_dir)
        )
    if set_type == 'dev':
        examples = (
            processor.get_dev_examples(args.data_dir)
        )
    if set_type == 'test':
        examples = (
            processor.get_test_examples(args.data_dir)
        )

    features = convert_examples_to_features(
        examples,  # raw InputExample records
        tokenizer,  # BERT tokenizer
        label_list=label_list,
        max_length=args.max_seq_length,  # maximum sequence length per example
        output_mode='classification',  # classification task
        pad_on_left=False,  # pad on the right
        pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
        pad_token_segment_id=0,  # segment id used for padding positions (0 for BERT)
        mask_padding_with_zero=True  # real tokens get attention mask 1, padding gets 0
    )

    # Convert to Tensors and build dataset
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
    all_labels = torch.tensor([f.label for f in features], dtype=torch.long)

    dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
    return dataset


def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs


def train(model, data_loader, criterion, optimizer):
    epoch_acc = 0.
    epoch_loss = 0.
    total_batch = 0
    model.train()

    for batch in data_loader:
        # move the batch to the target device
        batch = tuple(t.to(device) for t in batch)
        input_ids, attention_mask, token_type_ids, labels = batch
        # forward pass: logits for each class
        outputs = model(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)[0]
        # compute loss and accuracy
        loss = criterion(outputs, labels)
        _, y = torch.max(outputs, dim=1)
        acc = (y == labels).float().mean()

        # print progress every 100 batches
        if total_batch % 100 == 0:
            print('Iter_batch[{}/{}]:'.format(total_batch, len(data_loader)),
                  'Train Loss: ', "%.3f" % loss.item(), 'Train Acc:', "%.3f" % acc.item())

        # accumulate per-batch accuracy and loss for the epoch averages
        epoch_acc += acc.item()
        epoch_loss += loss.item()

        # gradient descent step
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        # scheduler.step()  # update learning rate schedule (if a scheduler is used)
        model.zero_grad()

        total_batch += 1
        # break
    return epoch_acc / len(data_loader), epoch_loss / len(data_loader)


def evaluate(model, data_loader, criterion):
    epoch_acc = 0.
    epoch_loss = 0.

    model.eval()
    with torch.no_grad():
        for batch in data_loader:
            # move the batch to the target device
            batch = tuple(t.to(device) for t in batch)
            input_ids, attention_mask, token_type_ids, labels = batch
            # forward pass
            outputs = model(input_ids=input_ids,
                            attention_mask=attention_mask,
                            token_type_ids=token_type_ids)[0]
            # compute loss and accuracy
            loss = criterion(outputs, labels)
            _, y = torch.max(outputs, dim=1)
            acc = (y == labels).float().mean()

            # accumulate per-batch accuracy and loss
            epoch_acc += acc.item()
            epoch_loss += loss.item()
    return epoch_acc / len(data_loader), epoch_loss / len(data_loader)


def main():
    # 1. define the data processor
    processor = Sst2Processor()
    label_list = processor.get_labels()
    num_labels = len(label_list)
    print('label_list: ', label_list)
    print('num_label: ', num_labels)
    print('*' * 60)
    # 2. load the datasets
    train_examples = processor.get_train_examples(args.data_dir)
    dev_examples = processor.get_dev_examples(args.data_dir)  # each record is wrapped in an InputExample instance
    test_examples = processor.get_test_examples(args.data_dir)  # each record is wrapped in an InputExample instance

    print('Number of training examples:', len(train_examples))
    print('Number of dev examples:', len(dev_examples))
    print('Number of test examples:', len(test_examples))
    print('Training example:\n', train_examples[0])
    print('Dev example:\n', dev_examples[0])
    print('Test example:\n', test_examples[0])

    # 3. load the local vocabulary and build the datasets
    bert_path = './bert_pretrain/'
    tokenizer = BertTokenizer.from_pretrained( os.path.join(bert_path, 'vocab.txt') )
    train_dataset = load_and_cache_examples(args, processor, tokenizer, 'train')
    dev_dataset = load_and_cache_examples(args, processor, tokenizer, 'dev')
    test_dataset = load_and_cache_examples(args, processor, tokenizer, 'test')
    train_dataloader = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True)
    dev_dataloader = DataLoader(dev_dataset, batch_size=args.batch_size, shuffle=False)
    test_dataloader = DataLoader(test_dataset, batch_size=args.batch_size, shuffle=False)

    # 4. define the model
    model = BertForSequenceClassification.from_pretrained(
        os.path.join(bert_path, 'pytorch_model.bin'),
        config=os.path.join(bert_path, 'config.json'))
    model.to(device)

    N_EPOCHS = args.n_epochs
    # AdamW optimizer
    t_total = len(train_dataloader) * N_EPOCHS  # total number of optimization steps (for a scheduler, if one is used)
    no_decay = ["bias", "LayerNorm.weight"]
    optimizer_grouped_parameters = [
        {
            "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
            "weight_decay": 0.0,  # weight decay is disabled here; set e.g. 0.01 to apply it to non-bias/non-LayerNorm weights
        },
        {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
    ]
    optimizer = AdamW(optimizer_grouped_parameters, lr=args.lr,
                      correct_bias=False)  
    # loss_func
    criterion = nn.CrossEntropyLoss()

    # 5. train the model
    print('Starting model training:')
    logger.info("***** Running training *****")
    logger.info("  train num batches = %d", len(train_dataloader))
    logger.info("  dev num batches = %d", len(dev_dataloader))
    logger.info("  test num batches = %d", len(test_dataloader))
    logger.info("  Num Epochs = %d", args.n_epochs)

    best_valid_loss = float('inf')
    for epoch in range(N_EPOCHS):

        start_time = time.time()

        train_acc, train_loss = train(model, train_dataloader, criterion, optimizer)
        val_acc, val_loss = evaluate(model, dev_dataloader, criterion)

        end_time = time.time()

        epoch_mins, epoch_secs = epoch_time(start_time, end_time)

        if val_loss < best_valid_loss:
            print('val loss decreased, saving model ->')
            best_valid_loss = val_loss
            torch.save(model.state_dict(), 'bert-model.pt')

        print(f'Epoch: {epoch + 1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
        print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc * 100:.3f}%')
        print(f'\t Val. Loss: {val_loss:.3f} |  Val. Acc: {val_acc * 100:.3f}%')

    # evaluate
    model.load_state_dict(torch.load('bert-model.pt'))
    test_acc, test_loss = evaluate(model, test_dataloader, criterion)
    print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc * 100:.2f}%')


if __name__ == '__main__':
    main()

Training Process

python3 main.py

val loss decreased, saving model ->
Epoch: 01 | Epoch Time: 13m 38s
        Train Loss: 0.293 | Train Acc: 88.791%
         Val. Loss: 0.307 |  Val. Acc: 88.600%

Epoch: 02 | Epoch Time: 13m 40s
        Train Loss: 0.160 | Train Acc: 94.905%
         Val. Loss: 0.336 |  Val. Acc: 88.664%

Epoch: 03 | Epoch Time: 13m 40s
        Train Loss: 0.127 | Train Acc: 96.367%
         Val. Loss: 0.471 |  Val. Acc: 88.935%
Test Loss: 0.289 | Test Acc: 89.57%

News Text Classification

We classify news articles by their titles, a Chinese short-text classification task. We use the pre-trained Chinese BERT model and implement the classifier with Hugging Face's transformers library; the final results are very good.

Pre-trained Chinese BERT model and vocabulary

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-pytorch_model.bin

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt

https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json

  • After downloading the files, rename them as follows:

pytorch_model.bin: the pre-trained model weights

vocab.txt: the vocabulary file

config.json: the BERT configuration file, defining the model's main parameters

  • Edit config.json to set the number of classes; here we set num_labels = 10 for a 10-class classification task (see also the sketch after the JSON below):
{
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "directionality": "bidi",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pooler_fc_size": 768,
  "pooler_num_attention_heads": 12,
  "pooler_num_fc_layers": 3,
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "type_vocab_size": 2,
  "vocab_size": 21128,
  "num_labels": 10
}
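
Alternatively, instead of editing the JSON by hand, the label count can be passed when loading the model; this is a sketch and assumes the bert_pretrain/ directory layout described above:

from transformers import BertForSequenceClassification

# num_labels passed here overrides the value in config.json
model = BertForSequenceClassification.from_pretrained('./bert_pretrain/', num_labels=10)
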
  • Loading the vocabulary and model
# directory containing the vocabulary and the pre-trained BERT model
bert_path = './bert_pretrain/'

# load the vocabulary: by default a file named vocab.txt in this directory
tokenizer = BertTokenizer.from_pretrained(bert_path)

# initialize the pre-trained BERT model for sequence classification
model = BertForSequenceClassification.from_pretrained(
        os.path.join(bert_path, 'pytorch_model.bin'),  # model weights
        config=os.path.join(bert_path, 'config.json')  # BERT configuration file
)

Code Implementation

  • Define a custom data processor, NewsProcessor, that inherits from DataProcessor

It follows the GLUE example (run_glue.py) in the transformers code.

class NewsProcessor(DataProcessor):
    """Processor for the news-title classification data set."""

    def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")

    def get_labels(self):
        """See base class."""
        return ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]

    def _create_examples(self, lines, set_type):
        """Creates examples for the training and dev/test sets."""
        examples = []
        for (i, line) in enumerate(lines):
            guid = "%s-%s" % (set_type, i)
            text_a = line[0]
            label = line[1]
            examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
        return examples





def load_and_cache_examples(args, processor, tokenizer, set_type):
    # Load data features from cache or dataset file
    print("Creating features from dataset file at {}".format(args.data_dir))
    label_list = processor.get_labels()

    if set_type == 'train':
        examples = (
            processor.get_train_examples(args.data_dir)
        )
    if set_type == 'dev':
        examples = (
            processor.get_dev_examples(args.data_dir)
        )
    if set_type == 'test':
        examples = (
            processor.get_test_examples(args.data_dir)
        )

    features = convert_examples_to_features(
        examples,  # raw InputExample records
        tokenizer,  # BERT tokenizer
        label_list=label_list,
        max_length=args.max_seq_length,  # maximum sequence length per example
        output_mode='classification',  # classification task
        pad_on_left=False,  # pad on the right
        pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
        pad_token_segment_id=0,  # segment id used for padding positions (0 for BERT)
        mask_padding_with_zero=True  # real tokens get attention mask 1, padding gets 0
    )

    # Convert to Tensors and build dataset
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
    all_labels = torch.tensor([f.label for f in features], dtype=torch.long)

    dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
    return dataset
  • Define the training and evaluation functions

def train(model, data_loader, criterion, optimizer):
    epoch_acc = 0.
    epoch_loss = 0.
    total_batch = 0
    model.train()


    for batch in data_loader:
        # move the batch to the target device
        batch = tuple(t.to(device) for t in batch)
        input_ids, attention_mask, token_type_ids, labels = batch
        # forward pass: logits for each class
        outputs = model(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)[0]
        # compute loss and accuracy
        loss = criterion(outputs, labels)
        _, y = torch.max(outputs, dim=1)
        acc = (y == labels).float().mean()


        # print progress every 100 batches
        if total_batch % 100 == 0:
            print('Iter_batch[{}/{}]:'.format(total_batch, len(data_loader)),
                  'Train Loss: ', "%.3f" % loss.item(), 'Train Acc:', "%.3f" % acc.item())


        # accumulate per-batch accuracy and loss for the epoch averages
        epoch_acc += acc.item()
        epoch_loss += loss.item()

        # gradient descent step
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        # scheduler.step()  # update learning rate schedule (if a scheduler is used)
        model.zero_grad()


        total_batch += 1
        # break
    return epoch_acc / len(data_loader), epoch_loss / len(data_loader)




def evaluate(model, data_loader, criterion):
    epoch_acc = 0.
    epoch_loss = 0.


    model.eval()
    with torch.no_grad():
        for batch in data_loader:
            # move the batch to the target device
            batch = tuple(t.to(device) for t in batch)
            input_ids, attention_mask, token_type_ids, labels = batch
            # forward pass
            outputs = model(input_ids=input_ids,
                            attention_mask=attention_mask,
                            token_type_ids=token_type_ids)[0]
            # compute loss and accuracy
            loss = criterion(outputs, labels)
            _, y = torch.max(outputs, dim=1)
            acc = (y == labels).float().mean()

            # accumulate per-batch accuracy and loss
            epoch_acc += acc.item()
            epoch_loss += loss.item()
    return epoch_acc / len(data_loader), epoch_loss / len(data_loader)

With the core code defined, we can set up the optimizer, load the data, and train the model; the steps are the same as in the movie review example above, so we do not repeat them here.

Below we look directly at the training process and the final results.

Training Process

python3 main.py

These are short texts: news titles average about 24 characters. We train for 3 epochs and find the results to be quite good.

Training example:
InputExample(guid='train-0', text_a='中華女子學院:本科層次僅1專業招男生', text_b=None, label='3')
Dev example:
InputExample(guid='dev-0', text_a='體驗2D巔峯 倚天屠龍記十大創新概覽', text_b=None, label='8')
Test example:
InputExample(guid='test-0', text_a='詞彙閱讀是關鍵 08年考研暑期英語複習全指南', text_b=None, label='3')

val loss decreased, saving model ->
Epoch: 01 | Epoch Time: 7m 0s
        Train Loss: 0.288 | Train Acc: 91.121%
         Val. Loss: 0.206 |  Val. Acc: 93.097%
val loss decreased, saving model ->
Epoch: 02 | Epoch Time: 7m 4s
        Train Loss: 0.153 | Train Acc: 95.030%
         Val. Loss: 0.200 |  Val. Acc: 93.612%
Epoch: 03 | Epoch Time: 7m 5s
        Train Loss: 0.109 | Train Acc: 96.385%
         Val. Loss: 0.205 |  Val. Acc: 93.661%
Test Loss: 0.201 | Test Acc: 93.72%

Online Prediction Service

  • Load the model and the vocabulary
  • Define the BERT prediction function
# load the class-name mapping (one class name per line)
with open('data/class.txt') as f:
    items = [item.strip() for item in f.readlines()]

def predict(model, text):
    # tokenize the input text and return PyTorch tensors
    tokens = tokenizer.encode_plus(text, return_tensors="pt")
    input_ids = tokens['input_ids']
    attention_mask = tokens['attention_mask']
    token_type_ids = tokens['token_type_ids']

    with torch.no_grad():
        outputs = model(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)[0]
        # take the class with the highest logit
        _, y = torch.max(outputs, dim=1)

    return y.item()
  • Test the online service
text = "安徽2009年成人高考成績查詢系統"
y_pred = predict(model,text)
print('*'*60)
print('在線服務預測:')
print("{}->{}->{}".format(  text,y_pred,items[y_pred]))

Output
安徽2009年成人高考成績查詢系統->3->教育
