STI Competition Task 2: [Answer Verification Baseline and Approach Write-up]

Full code: https://aistudio.baidu.com/aistudio/projectdetail/5194830

Subtask 2: Answer Verification

Task Overview

The answer-extraction step in Subtask 1 relies mainly on the semantic relevance between answer spans and the search query, but it cannot guarantee that the extracted spans themselves are correct and reliable. An answer-verification stage is therefore needed after extraction to select, from the multiple extracted spans, the high-confidence answer with the broadest public acceptance for final display. Given a search question q and its corresponding document set D, Subtask 2 asks systems to cluster all documents by the consistency of the answer opinions they contain, producing for each query the document set that holds the most widely accepted answer, thereby ensuring the credibility of the deep intelligent QA system's final answer.

Task Definition

Given a set of search questions Q and, for each question q, the set of retrieved web documents Dq, the task requires participating systems to cluster the answers those documents contain. Answers in the same subset of the clustering should share the same opinion and be semantically similar; answers in different subsets should hold different opinions or be semantically contradictory. For each answer set, we define its recognition degree as the number of distinct documents its answers come from (so an answer repeated within a single document is not counted multiple times). Systems must return the document set corresponding to the answer set with the highest recognition degree.
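To make the recognition-degree definition concrete, here is a minimal sketch (the cluster structure is hypothetical, for illustration only):

# Recognition degree of an answer set = number of distinct source documents,
# so an answer repeated within one document is counted only once.
def recognition_degree(cluster):
    """cluster: list of (answer_text, doc_id) pairs."""
    return len({doc_id for _, doc_id in cluster})

clusters = [[('a1', 'd1'), ('a2', 'd2')], [('a3', 'd3')]]
best = max(clusters, key=recognition_degree)  # -> the {a1, a2} cluster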

We encourage participants to cast this task as answer semantic inference (other reasonable approaches are also welcome). Given a search query and two answer spans ai and aj: if the query-relevant answer semantics of ai and aj are identical, or one entails the other, then ai and aj are semantically consistent and should be placed in the same answer set; if their query-relevant answer semantics are unrelated, or contradictory, then ai and aj are semantically inconsistent and should be placed in different answer sets.
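One natural way to turn such pairwise judgments into answer sets is to treat consistent pairs as edges and take connected components; a minimal union-find sketch (the answer IDs and pair list are hypothetical):

# Group answers into sets by transitively merging 'consistent' pairs (union-find).
def cluster_by_consistency(answers, consistent_pairs):
    parent = {a: a for a in answers}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in consistent_pairs:
        parent[find(a)] = find(b)
    groups = {}
    for a in answers:
        groups.setdefault(find(a), []).append(a)
    return list(groups.values())

print(cluster_by_consistency(['a1', 'a2', 'a3'], [('a1', 'a2')]))
# -> [['a1', 'a2'], ['a3']]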

Dataset

The data provided for this task supports semantic inference between different answers under the same query. Both the training and validation sets contain the search question set Q, the web document set D, the answer set A, and the semantic-consistency relation (support, neutral, oppose) between answer pairs under the same query; see the data sample below for the exact format. To simulate a real deep intelligent QA scenario at test time, the test set does not provide the answer set A: participants must first run the answer-extraction system built for Subtask 1 and then compute semantic consistency. The final system should return the document set corresponding to the answer set with the highest recognition degree.

The training set annotates semantic consistency for about 200k answer pairs; the validation set contains about 10k annotated pairs. For both, the highest-recognition answer set and its document set can be derived from the consistency annotations; the test set provides only queries and document sets. The main characteristics of the data are:

Documents and answers are generally long, with plenty of distracting information, making semantic computation difficult
Complex consistency relations may exist within an answer set

Data Sample

Question q: 備孕偶爾喝冰的可以嗎 (Is it okay to occasionally drink something cold while trying to conceive?)

Passage d1: 備孕能喫冷的食物嗎 炎熱的夏天讓很多人都覺得悶熱...,下面一起來看看吧! 備孕能喫冷的食物嗎 在中醫養生中,女性體質屬陰,不可以貪涼。吃了過多寒涼、生冷的食物後,會消耗陽氣,導致寒邪內生,侵害子宮。另外,宮寒是腎陽虛的表現,不會直接導致不孕。但宮寒會引起婦科疾病,所以也不可不防。因此處於備孕期的女性最好不要喫冷的食物。 備孕食譜有哪些 ...
Answer a1: 在中醫養生中,女性體質屬陰,不可以貪涼。吃了過多寒涼、生冷的食物後,會消耗陽氣,導致寒邪內生,侵害子宮。另外,宮寒是腎陽虛的表現,不會直接導致不孕。但宮寒會引起婦科疾病,所以也不可不防。因此處於備孕期的女性最好不要喫冷的食物。 (gist: by TCM reasoning, women preparing for pregnancy had best avoid cold foods)

Passage d2: 病情分析:備孕通常不能喝冰飲料,避免影響胎兒健康。患者正處於備孕準備階段,男性和女性患者都需要注意飲食不要太辛辣和刺激,不推薦冷凍和冷飲。...
Answer a2: 備孕通常不能喝冰飲料,避免影響胎兒健康。 (gist: iced drinks are generally not advised while trying to conceive)

Passage d3: 備孕期間能喝冰水?備孕期間能喝冰水嗎:這個應該不會有影響的 在線諮詢...
Answer a3: 這個應該不會有影響的 (gist: this should have no effect)

Consistency of pair <a1, a2>: support; consistency of pair <a1, a3>: oppose; consistency of pair <a2, a3>: oppose
Answer clustering result: {a1, a2}, recognition degree 2; {a3}, recognition degree 1. The highest-recognition answer set is {a1, a2}, and its document set is {d1, d2}.

Data Description

Files prefixed with train/dev/test are the training, development, and test sets, respectively.

1. xxx_query_doc.json provides the queries and their docs.
One query per line, stored as JSON. The top level has two fields, query and docs; docs is a list in which each item is a doc with four fields: title, url, doc_text, and doc_id. doc_text is the doc's body text and doc_id is its identifier.
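An illustrative line (field values abridged for illustration):

{"query": "備孕偶爾喝冰的可以嗎", "docs": [{"title": "...", "url": "http://...", "doc_text": "...", "doc_id": "d1"}, ...]}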

2. xxx_answer_nli_data.tsv is the answer-relation annotation data.
Each line has six tab-separated columns: query, url1, answer1, url2, answer2, label.
label 1 means the two answers support each other (semantically consistent); label 0 means they are in a neutral or opposing relation.
This data can be used to train a model that judges answer relations, in support of the final task.
The test set does not include this file.
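An illustrative row (tab-separated; drawn from the data sample above, so label 0 marks an opposing pair):

備孕偶爾喝冰的可以嗎	http://...	備孕通常不能喝冰飲料,避免影響胎兒健康。	http://...	這個應該不會有影響的	0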

3. xxx_label.tsv is the annotation for the answer-verification task itself.
One query per line, with two tab-separated columns: the first is the query, the second is the document set containing the highest-recognition answer, given as doc_ids joined by commas.
The test set does not include this file.
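An illustrative line (taken from the data sample above):

備孕偶爾喝冰的可以嗎	d1,d2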

Data Loading and Analysis

import pandas as pd

train_query_doc = pd.read_json('data_task2/train_query_doc.json', lines=True)

Annotations for the answer-verification task

# annotations for the answer-verification task
# header=None assumes the file has no header row, per the data description above
train_label = pd.read_table('data_task2/train_label.tsv', header=None)
train_label.columns = ['query', 'doc_ids']
train_label.shape
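Since the doc_id list is comma-joined, splitting it into Python lists can make later evaluation easier (a convenience step, not part of the original baseline):

# split the comma-joined doc_id string into a list per query
train_label['doc_id_list'] = train_label['doc_ids'].str.split(',')
train_label.head()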

Approach 1: Clustering Similar Documents with the Unsupervised SinglePass Algorithm

Cluster the documents retrieved for each query directly, and keep those with high mutual similarity.

import numpy as np
from gensim import corpora, models, matutils


class SinglePassClusterTfidf():
    '''
        Single-pass clustering: cosine similarity over tf-idf vectors.
    '''
    def tfidf_vec(self, corpus, pivot=10, slope=0.25):
        dictionary = corpora.Dictionary(corpus)  # build the token-to-id dictionary
        self.dict_size = len(dictionary)
        # print('dictionary size:{}'.format(len(dictionary)))
        corpus = [dictionary.doc2bow(text) for text in corpus]  # bag-of-words vectors
        tfidf = models.TfidfModel(corpus, pivot=pivot, slope=slope)
        corpus_tfidf = tfidf[corpus]
        return corpus_tfidf

    def get_max_similarity(self, cluster_cores, vector):
        max_value = 0
        max_index = -1
        for k, core in cluster_cores.items():
            similarity = matutils.cossim(vector, core)
            if similarity > max_value:
                max_value = similarity
                max_index = k
        return max_index, max_value

    def single_pass(self, corpus_vec, corpus, theta):
        clusters = {}
        cluster_cores = {}
        cluster_text = {}
        num_topic = 0
        cnt = 0
        for vector, text in zip(corpus_vec, corpus):
            if num_topic == 0:
                clusters.setdefault(num_topic, []).append(vector)
                cluster_cores[num_topic] = vector
                cluster_text.setdefault(num_topic, []).append(text)
                num_topic += 1
            else:
                max_index, max_value = self.get_max_similarity(cluster_cores, vector)
                if max_value > theta:
                    clusters[max_index].append(vector)
                    text_matrix = matutils.corpus2dense(clusters[max_index], num_terms=self.dict_size,
                                                        num_docs=len(clusters[max_index])).T  # sparse to dense
                    core = np.mean(text_matrix, axis=0)  # update the cluster centroid
                    core = matutils.any2sparse(core)  # convert the dense centroid back to sparse
                    cluster_cores[max_index] = core
                    cluster_text[max_index].append(text)
                else:  # start a new cluster
                    clusters.setdefault(num_topic, []).append(vector)
                    cluster_cores[num_topic] = vector
                    cluster_text.setdefault(num_topic, []).append(text)
                    num_topic += 1
            cnt += 1
            if cnt % 100 == 0:
                print('processing {}...'.format(cnt))
        return clusters, cluster_text

    def fit_transform(self, corpus, raw_data, theta=0.6):
        tfidf_vec = self.tfidf_vec(corpus)  # sparse tf-idf vectors
        clusters, cluster_text = self.single_pass(tfidf_vec, raw_data, theta)
        return clusters, cluster_text
    

class ClusterTfidf:
    def __init__(self):
        self.clustor = SinglePassClusterTfidf()

    """Main clustering entry point"""
    def cluster(self, corpus, text2index, theta=0.6):
        clusters, cluster_text = self.clustor.fit_transform(corpus, text2index, theta)
        return clusters, cluster_text

import jieba
import json
import collections

handler_tfidf = ClusterTfidf()

class SinglePassCluster(object):
    """Initialization"""
    def __init__(self):
        pass

    """讀取文件數據"""
    def load_data(self, filepath):
        datas = []
        with open(filepath, 'r', encoding='utf-8') as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                datas.append(line)
        return datas

    """加載文檔,並進行轉換"""
    def load_docs(self, docs):
        corpus = [list(jieba.cut(s['title']+s['doc_text'])) for s in docs]
        doc_ids = [s['doc_id'] for s in docs]
        
        index2corpus = dict()
        
        for index, line in zip(doc_ids,docs):
            index2corpus[index] = line
        text2index = list(index2corpus.keys())
        # print('docs total size:{}'.format(len(text2index)))
        return text2index, index2corpus, corpus

    """保存聚類結果"""
    def save_cluster(self, method, index2corpus, cluster_text, cluster_path):
        clusterTopic_list = sorted(cluster_text.items(), key=lambda x: len(x[1]), reverse=True)
        # print(clusterTopic_list)
        with open(cluster_path + '/cluster_%s.json' % method, 'w+', encoding='utf-8') as save_obj:
            for k in clusterTopic_list:
                data = dict()
                data["cluster_id"] = k[0]
                data["cluster_nums"] = len(k[1])
                data["cluster_docs"] = [{"doc_id": index, "doc_content": index2corpus.get(value)} for index, value in
                                        enumerate(k[1], start=1)]
                json_obj = json.dumps(data, ensure_ascii=False)
                save_obj.write(json_obj)
                save_obj.write('\n')

    """聚類運行主控函數"""
    def cluster(self, docs,method="doc2vec", theta=0.6):
        # docs = self.load_data(self.train_corpus_filepath)
        text2index, index2corpus, corpus = self.load_docs(docs)
        # print("loaded %s samples...." % len(docs))
        if method == "tfidf":
            clusters, cluster_text = handler_tfidf.cluster(corpus, text2index, theta)
            # self.save_cluster(method, index2corpus, cluster_text, cluster_path)
            return clusters, cluster_text
        else:
            clusters, cluster_text = handler_docvec.cluster(corpus, text2index, theta)
            return clusters, cluster_text
            # self.save_cluster(method, index2corpus, cluster_text, cluster_path)
        return

Submitting Prediction Results

# load the test queries and docs (mirrors the training-set loading above;
# file name follows the train/dev/test naming convention described earlier)
test_query_doc = pd.read_json('data_task2/test_query_doc.json', lines=True)

submit_file = open('subtask2_test_pred.txt', 'w', encoding='utf-8')
for idx, row in test_query_doc.iterrows():
    method = "tfidf"
    theta = 0.4
    handler = SinglePassCluster()
    clusters, cluster_text = handler.cluster(row['docs'], method=method, theta=theta)
    # expected output format, one query per line:
    #   first query    d1,d2,d4,d8
    #   second query   d0,d1,d3
    similar_docs = []
    for key, value in cluster_text.items():
        if len(value) > 1:  # keep every cluster with more than one document
            similar_docs.extend(value)
    submit_file.write(row['query'] + '\t' + ','.join(similar_docs) + '\n')
submit_file.close()
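Note that the loop above keeps every cluster with more than one document, whereas the task definition asks for the document set of the single highest-recognition cluster. A stricter variant of the selection step inside the loop (a sketch under that reading, with a fallback for all-singleton clusterings) could be:

# pick only the largest cluster; fall back to the top document if all clusters are singletons
largest = max(cluster_text.values(), key=len)
similar_docs = largest if len(largest) > 1 else [row['docs'][0]['doc_id']]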

Approach 2: Semantic Inference over Answers Extracted in Task 1

The idea of Approach 2: first train an answer semantic-consistency inference model on the answer-relation annotation data (xxx_answer_nli_data.tsv); then use the Task 1 system to extract answers from each query's docs; finally, judge the similarity between the answers of the documents that have answers, and group together documents whose answer similarity exceeds a threshold.

Due to time constraints, training is currently slow; the submission code will be updated once scores are available.

Answer Semantic Inference Model

We treat this directly as a binary classification task; since the NLI training data is fairly large, we can subsample it for training (see the sketch after the training-set construction below).

Building the Training Set

 
def concat_text(row):
    return str(row['answer1']) + '[SEP]' + row['answer2']

train = pd.read_table('data_task2/train_answer_nli_data.tsv', header=None)
train.columns = ['query', 'url1', 'answer1', 'url2', 'answer2', 'label']

train.fillna('', inplace=True)
train['text'] = train.apply(lambda row: concat_text(row), axis=1)
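As mentioned above, the ~200k-pair NLI training set can be subsampled to speed up fine-tuning; a minimal sketch (the sample size is an arbitrary choice):

# optional: subsample the NLI pairs to shorten training (sample size is arbitrary)
train = train.sample(n=50000, random_state=42).reset_index(drop=True)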

Model Training

import os
import random
from functools import partial
from sklearn.utils.class_weight import compute_class_weight

import numpy as np
import pandas as pd
import paddle
import paddle as P
import paddle.nn.functional as F
import paddlenlp as ppnlp  # PaddleNLP transformers (PaddlePaddle's counterpart to Hugging Face transformers)
from paddle.io import Dataset
from paddlenlp.data import Stack, Tuple, Pad
from paddlenlp.datasets import MapDataset
from paddlenlp.transformers import LinearDecayWithWarmup
from sklearn.model_selection import StratifiedKFold
from tqdm import tqdm
 
 
# =============================== Initialization ========================
class Config:
    text_col = 'text'
    target_col = 'label'
    # maximum sequence length (256 covers ~95% of samples)
    max_len = 256
    # batch size
    batch_size = 32
    target_size = 2
    seed = 71
    n_fold = 5
    # peak learning rate during training
    learning_rate = 5e-5
    # number of training epochs
    epochs = 3
    # warmup proportion for the learning-rate schedule
    warmup_proportion = 0.1
    # weight-decay coefficient, a regularization strategy to reduce overfitting
    weight_decay = 0.01
    model_name = "ernie-gram-zh"
    print_freq = 100
 
 
def seed_torch(seed=42):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
 

 
CFG = Config()
seed_torch(seed=CFG.seed)
 
 
# y = train[CFG.target_col]
# class_weight = 'balanced'
# classes = train[CFG.target_col].unique()  # 標籤類別
# weight = compute_class_weight(class_weight=class_weight,classes= classes, y=y)
 
 

 
# CV split: 5-fold StratifiedKFold (stratified sampling)
folds = train.copy()
Fold = StratifiedKFold(n_splits=CFG.n_fold, shuffle=True, random_state=CFG.seed)
for n, (train_index, val_index) in enumerate(Fold.split(folds, folds[CFG.target_col])):
    folds.loc[val_index, 'fold'] = int(n)
folds['fold'] = folds['fold'].astype(int)
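A quick sanity check that the folds are balanced across labels (an optional step, not in the original baseline):

# fold sizes and label distribution per fold
print(folds.groupby('fold')[CFG.target_col].value_counts())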
 
 
# ====================================== Dataset and conversion functions ==============================
# a Torch-style map dataset, implemented with paddle.io.Dataset
class CustomDataset(Dataset):
    def __init__(self, df):
        self.data = df.values.tolist()
        self.texts = df[CFG.text_col]
        self.labels = df[CFG.target_col]
 
    def __len__(self):
        return len(self.texts)
 
    def __getitem__(self, idx):
        """
        Fetch one example by index.
        :param idx:
        :return:
        """
        text = str(self.texts[idx])
        label = self.labels[idx]
        example = {'text': text, 'label': label}
 
        return example
 
 
def convert_example(example, tokenizer, max_seq_length=512, is_test=False):
    """
    Build BERT-style inputs
    ::
        0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
        | first sequence    | second sequence |
    Returns:
        input_ids(obj:`list[int]`): The list of token ids.
        token_type_ids(obj: `list[int]`): List of sequence pair mask.
        label(obj:`numpy.array`, data type of int64, optional): The input label if not is_test.
    """
    encoded_inputs = tokenizer(text=example["text"], max_seq_len=max_seq_length)
    input_ids = encoded_inputs["input_ids"]
    token_type_ids = encoded_inputs["token_type_ids"]
 
    if not is_test:
        label = np.array([example["label"]], dtype="int64")
        return input_ids, token_type_ids, label
    else:
        return input_ids, token_type_ids
 
 
def create_dataloader(dataset,
                      mode='train',
                      batch_size=1,
                      batchify_fn=None,
                      trans_fn=None):
    if trans_fn:
        dataset = dataset.map(trans_fn)
 
    shuffle = True if mode == 'train' else False
    if mode == 'train':
        batch_sampler = paddle.io.DistributedBatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)
    else:
        batch_sampler = paddle.io.BatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)
 
    return paddle.io.DataLoader(
        dataset=dataset,
        batch_sampler=batch_sampler,
        collate_fn=batchify_fn,
        return_list=True)
 
 
# tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained(CFG.model_name)
tokenizer = ppnlp.transformers.ErnieGramTokenizer.from_pretrained(CFG.model_name)
 
trans_func = partial(
    convert_example,
    tokenizer=tokenizer,
    max_seq_length=CFG.max_len)
batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=tokenizer.pad_token_id),  # input
    Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # segment
    Stack(dtype="int64")  # label
): [data for data in fn(samples)]
 

 
# ====================================== Training, evaluation, and prediction functions ==============================
 
@paddle.no_grad()
def evaluate(model, criterion, metric, data_loader):
    """
    Evaluation loop
    """
    model.eval()
    metric.reset()
    losses = []
    for batch in data_loader:
        input_ids, token_type_ids, labels = batch
        logits = model(input_ids, token_type_ids)
        loss = criterion(logits, labels)
        losses.append(loss.numpy())
        correct = metric.compute(logits, labels)
        metric.update(correct)
        accu = metric.accumulate()
    print("eval loss: %.5f, accu: %.5f" % (np.mean(losses), accu))
    model.train()
    metric.reset()
    return accu
 

 
def train():
    # ====================================  Cross-validation training ==========================
    for fold in range(5):
        print(f"===============training fold_nth:{fold + 1}======================")
        trn_idx = folds[folds['fold'] != fold].index
        val_idx = folds[folds['fold'] == fold].index
 
        train_folds = folds.loc[trn_idx].reset_index(drop=True)
        valid_folds = folds.loc[val_idx].reset_index(drop=True)
 
        train_dataset = CustomDataset(train_folds)
        train_ds = MapDataset(train_dataset)
 
        dev_dataset = CustomDataset(valid_folds)
        dev_ds = MapDataset(dev_dataset)
 
        train_data_loader = create_dataloader(
            train_ds,
            mode='train',
            batch_size=CFG.batch_size,
            batchify_fn=batchify_fn,
            trans_fn=trans_func)
        dev_data_loader = create_dataloader(
            dev_ds,
            mode='dev',
            batch_size=CFG.batch_size,
            batchify_fn=batchify_fn,
            trans_fn=trans_func)
 
        model = ppnlp.transformers.ErnieGramForSequenceClassification.from_pretrained(CFG.model_name,
                                                                                      num_classes=CFG.target_size)
 
        num_training_steps = len(train_data_loader) * CFG.epochs
        lr_scheduler = LinearDecayWithWarmup(CFG.learning_rate, num_training_steps, CFG.warmup_proportion)
        optimizer = paddle.optimizer.AdamW(
            learning_rate=lr_scheduler,
            parameters=model.parameters(),
            weight_decay=CFG.weight_decay,
            apply_decay_param_fun=lambda x: x in [
                p.name for n, p in model.named_parameters()
                if not any(nd in n for nd in ["bias", "norm"])
            ])
 
        criterion = paddle.nn.loss.CrossEntropyLoss()
        metric = paddle.metric.Accuracy()
 
        global_step = 0
        best_val_acc = 0
        for epoch in range(1, CFG.epochs + 1):
            for step, batch in enumerate(train_data_loader, start=1):
                input_ids, segment_ids, labels = batch
                logits = model(input_ids, segment_ids)
                # probs_ = paddle.to_tensor(logits, dtype="float64")
                loss = criterion(logits, labels)
                probs = F.softmax(logits, axis=1)
                correct = metric.compute(probs, labels)
                metric.update(correct)
                acc = metric.accumulate()
 
                global_step += 1
                if global_step % CFG.print_freq == 0:
                    print("global step %d, epoch: %d, batch: %d, loss: %.5f, acc: %.5f" % (
                        global_step, epoch, step, loss, acc))
                loss.backward()
                optimizer.step()
                lr_scheduler.step()
                optimizer.clear_grad()
            acc = evaluate(model, criterion, metric, dev_data_loader)
            if acc > best_val_acc:
                best_val_acc = acc
                P.save(model.state_dict(), f'{CFG.model_name}_fold{fold}.bin')
            print('Best Val acc %.5f' % best_val_acc)
        del model
        break  # train only the first fold to save time
 
if __name__ == '__main__':
    train()
 

Submitting Prediction Results

model = ppnlp.transformers.ErnieGramForSequenceClassification.from_pretrained(CFG.model_name, num_classes=2)
model.load_dict(P.load('ernie-gram-zh_fold0.bin'))
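The submission loop below calls a predict helper that is not included in the original snippet; here is a minimal sketch consistent with the training pipeline above (the batching and naming are assumptions):

@paddle.no_grad()
def predict(model, data, tokenizer, batch_size=16):
    """Return softmax probabilities for a list of {'text': ...} examples."""
    model.eval()
    examples = [convert_example(d, tokenizer, max_seq_length=CFG.max_len, is_test=True)
                for d in data]
    # pad input ids and segment ids within each batch
    batchify = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),
    ): fn(samples)
    probs = []
    for i in range(0, len(examples), batch_size):
        input_ids, token_type_ids = batchify(examples[i:i + batch_size])
        logits = model(paddle.to_tensor(input_ids), paddle.to_tensor(token_type_ids))
        probs.append(F.softmax(logits, axis=1).numpy())
    return np.concatenate(probs)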


# qa_task2 is assumed to be the answer-extraction output of the Task 1 system,
# with columns: query, doc_id, answer, score
submit_file = open('subtask2_test_pred.txt', 'w', encoding='utf-8')
querys = qa_task2['query'].unique()
for query in querys:
    group = qa_task2[qa_task2['query'] == query]
    group = group[group['answer'] != 'NoAnswer'].reset_index(drop=True)
    group = group.sort_values(by=['query', 'score'], ascending=False).reset_index(drop=True)
    group['doc_id'] = group['doc_id'].apply(lambda x: x.split('_')[-1])

    similar_docs = []
    # take the top-scoring document's answer as the reference answer
    top_text = group['answer'][0]
    similar_docs.append(group['doc_id'][0])

    texts = [{'text': top_text + '[SEP]' + text} for text in group['answer'][1:]]
    candidate_docs = list(group['doc_id'][1:])

    if texts:
        pred = predict(model, texts, tokenizer, 16)
        preds = list(pred[:, 1])
        for doc_id, prob in zip(candidate_docs, preds):
            if prob > 0.2:  # loose probability threshold for judging consistency
                similar_docs.append(doc_id)
    submit_file.write(query + '\t' + ','.join(similar_docs) + '\n')
    del group
submit_file.close()

    


