Recommended-review display means selecting one review out of a shop's many user reviews to serve as the reason to recommend that shop, in the hope that more people will click through to it.
This looks like a recommender-system problem, since the point is to show each user a suitable review. Say user A cares a lot about ambience: if the blurb reads "great ambience", A will click in. User B only cares about the food; ambience is irrelevant as long as it tastes good, so a blurb like "their X is insanely delicious" makes B more likely to click. In other words, different people could see different recommendation blurbs for the same shop. I wonder whether Meituan or Dianping actually do this~
Back to business: this task is a typical short-text (at most 20 characters) binary classification problem, tackled with a pre-trained BERT. I also recommend the paper How to Fine-Tune BERT for Text Classification?; its techniques for BERT-based text classification (long or short text) are all very intuitive ideas, and I have used them in a long-text classification task with good results~
Contents
1. Problem Description
1.1 Background
The goal of this recommended-review display task is to mine, from real user reviews, short sentences suitable to serve as recommendation blurbs. The blurbs a review app can display are length-limited, whereas real user reviews are fluent and information-rich. Both carry positive or negative user sentiment, but a displayed blurb needs higher content relevance than a raw review, along with strong textual appeal. Some real recommendation blurbs are shown in the figure below:
1.2 Dataset
Training set: 16,000 examples, with a positive-to-negative ratio of about 1:2; a sample is shown below
Test set: 4,189 examples; a sample is shown below
Dataset link: data download address
1.3 Evaluation Metric
AUC
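AUC has a handy interpretation: it is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A small pure-Python sketch (with made-up labels and scores) computes it via the rank-sum formula:

```python
def auc_by_rank(y_true, y_score):
    """Compute AUC via the rank-sum (Mann-Whitney U) formula."""
    # Sort indices by score, assigning average ranks to ties.
    order = sorted(range(len(y_score)), key=lambda i: y_score[i])
    ranks = [0.0] * len(y_score)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and y_score[order[j + 1]] == y_score[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    rank_sum = sum(r for r, t in zip(ranks, y_true) if t == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc_by_rank([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

The second positive (score 0.35) is outranked by one negative (score 0.4), so 3 of the 4 positive-negative pairs are ordered correctly, giving 0.75.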
2. Solution Approach
2.1 The underlying assumption of ML/DL
Machine learning and deep learning both rest on the premise that the training and test sets are independent and identically distributed; only when this holds can a model be expected to perform well. Here we take a quick look at text length: if the training set were all short texts while the test set were long ones, the model would presumably not do well~
# quick length check: add a text-length column to both splits
train['length'] = train['content'].apply(lambda row: len(row))
test['length'] = test['content'].apply(lambda row: len(row))
As the plots show, the training and test sets share the same length distribution, and the lengths for label 0 and label 1 differ little, so text length is of little use as a classification feature.
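The per-label comparison above boils down to a groupby over the length column; a toy sketch (the tiny frame and its values are invented for illustration, with column names following the snippet above):

```python
import pandas as pd

# Invented stand-in for the real training frame.
train = pd.DataFrame({'content': ['好吃', '环境不错', '服务热情周到'],
                      'label': [1, 0, 1]})
train['length'] = train['content'].apply(len)

# Compare length statistics per label; similar distributions suggest
# that length carries little signal for classification.
print(train.groupby('label')['length'].describe())
```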
2.2 Main idea
There are many ways to classify text: classical machine learning methods, fastText, TextCNN, RNN-based models, and so on. Next to BERT, though, they all pale, and BERT is practically made for classification tasks. Since this is a leaderboard task, I will not hold back: BERT it is~ (Thanks are due here to the lab's V100; Boyu did provide a GPU environment, but running locally is more comfortable~)
Since BERT suits classification so well, let's use it. The official recipe takes the hidden state at [CLS] and passes it through a fully connected layer to get the classification result. To make fuller use of the information in the sequence, I instead take BERT's whole last layer and apply a few simple operations, as follows:
BERT last layer -> global average pooling + global max pooling + the [CLS] position of a self-attention layer, all concatenated
The Keras implementation:
from keras_bert import load_trained_model_from_checkpoint, Tokenizer
from keras_self_attention import SeqSelfAttention

def build_bert(nclass, selfloss, lr, is_train):
    """
    nclass: number of nodes in the output layer
    selfloss: loss function
    lr: learning rate
    is_train: whether to fine-tune the BERT layers
    """
    bert_model = load_trained_model_from_checkpoint(config_path, checkpoint_path, seq_len=None)
    for l in bert_model.layers:
        l.trainable = is_train
    x1_in = Input(shape=(None,))
    x2_in = Input(shape=(None,))
    x = bert_model([x1_in, x2_in])
    x = Lambda(lambda x: x[:, :])(x)  # last hidden layer, all positions
    avg_pool_3 = GlobalAveragePooling1D()(x)
    max_pool_3 = GlobalMaxPooling1D()(x)
    attention_3 = SeqSelfAttention(attention_activation='softmax')(x)
    attention_3 = Lambda(lambda x: x[:, 0])(attention_3)  # keep the [CLS] position
    x = keras.layers.concatenate([avg_pool_3, max_pool_3, attention_3])
    p = Dense(nclass, activation='sigmoid')(x)
    model = Model([x1_in, x2_in], p)
    model.compile(loss=selfloss,
                  optimizer=Adam(lr),
                  metrics=['acc'])
    print(model.summary())
    return model
I did try some more complex heads (for example a CNN or a GRU layer on top), and also tried pulling out the last three layers' features and combining them, but none of it improved the score, though the results were not bad either.
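As a sanity check on shapes, the three branches above can be sketched in NumPy, with a random tensor standing in for BERT's last hidden layer. Note the attention branch here is a toy softmax-weighted sum against the [CLS] vector, not keras-self-attention's exact scoring:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq_len, hidden = 2, 20, 768
h = rng.normal(size=(batch, seq_len, hidden))  # stand-in for BERT's last layer

avg_pool = h.mean(axis=1)  # global average pooling -> (batch, hidden)
max_pool = h.max(axis=1)   # global max pooling     -> (batch, hidden)

# Toy attention: score each position against [CLS] (position 0),
# softmax the scores, and take the weighted sum over positions.
scores = np.einsum('bd,bsd->bs', h[:, 0], h) / np.sqrt(hidden)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
cls_att = np.einsum('bs,bsd->bd', weights, h)  # (batch, hidden)

features = np.concatenate([avg_pool, max_pool, cls_att], axis=1)
print(features.shape)  # (2, 2304), i.e. 3 * 768 features per example
```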
2.3 Further improvement
The training set's positive-to-negative ratio is 1:2, which is not exactly imbalanced, but not balanced either. The usual loss is cross-entropy, but cross-entropy is not strictly monotonic with AUC: a lower cross-entropy does not necessarily mean a higher AUC. The best option would be to optimise AUC directly, but AUC is not differentiable.
With balanced classes, AUC, F1 and accuracy tend to agree; under imbalance, accuracy is not a usable metric, and F1 or AUC should be used instead. Both AUC and F1 are functions of precision and recall, so let's just optimise F1 directly. F1 is not differentiable either, but there is a way: see Su Jianlin's post 函数光滑化杂谈:不可导函数的可导逼近 on differentiable approximations of non-differentiable functions. (After reading it, test yourself on writing the multi-class F1! Haha.)
So f1_loss is used directly as the loss function:
def f1_loss(y_true, y_pred):
    # y_true: ground-truth label, 0 or 1
    # y_pred: predicted probability of the positive class
    # soft F1; the epsilon belongs inside the denominator to avoid division by zero
    loss = 2 * tf.reduce_sum(y_true * y_pred) / (tf.reduce_sum(y_true + y_pred) + K.epsilon())
    return -loss
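As for the multi-class exercise mentioned above, here is a hedged NumPy sketch of a macro-averaged soft F1: the same 2*TP / (T + P) form applied per class over one-hot labels, then averaged. The function name and toy values are mine, and NumPy stands in for TensorFlow:

```python
import numpy as np

def macro_soft_f1_loss(y_true, y_pred, eps=1e-7):
    """Macro-averaged soft F1 over one-hot labels and class probabilities."""
    # Per-class soft F1: 2 * sum(t * p) / (sum(t) + sum(p)), summed over samples.
    tp = (y_true * y_pred).sum(axis=0)
    denom = y_true.sum(axis=0) + y_pred.sum(axis=0) + eps
    f1 = 2 * tp / denom
    return -f1.mean()  # negate so minimising the loss maximises F1

y_true = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]], dtype=float)
print(macro_soft_f1_loss(y_true, y_pred))  # about -0.7847
```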
(The code has a bit of a TensorFlow flavour; I simply write more TF than plain Keras.)
3. Hands-on Practice
Step 1: batch size 16, cross-entropy loss, learning rate 1e-5, fine-tuning the BERT layers, i.e.
build_bert(1, 'binary_crossentropy', 1e-5, True)
(I used the same learning rate in an earlier long-text classification task; call it a family-heirloom hyperparameter.)
Step 2: load the model from Step 1, freeze the BERT layers and fine-tune only the fully connected head; batch size is still 16, learning rate 1e-7, i.e.
build_bert(1, f1_loss, 1e-7, False)
(That earlier task was three-class long-text classification, also with imbalanced samples; fine-tuning with f1_loss helped there as well.)
(Do not train with f1_loss from the start, it causes problems. If you want to know why, leave a comment.)
Note that the number of model parameters vastly exceeds the data size (16,000), so overfitting is theoretically guaranteed; early stopping is used to counter it.
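The patience logic behind early stopping is simple enough to sketch in a few lines of plain Python, monitoring a validation score that should increase, with patience 3 as in the EarlyStopping(monitor='val_acc', patience=3) call in the full code:

```python
def train_with_early_stopping(val_scores, patience=3):
    """Return the 1-based epoch at which training stops."""
    best, wait = float('-inf'), 0
    for epoch, score in enumerate(val_scores, start=1):
        if score > best:
            best, wait = score, 0  # improvement: reset the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch  # no improvement for `patience` epochs
    return len(val_scores)

print(train_with_early_stopping([0.90, 0.92, 0.93, 0.93, 0.92, 0.91]))  # 6
```

The best score (0.93) arrives at epoch 3; epochs 4, 5 and 6 fail to beat it, so training halts at epoch 6.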
4. Full Code
There is a touch of model ensembling here: five-fold cross-validation yields five models, and the final result is the mean of their predictions on the test set, as shown in the figure below.
(Borrowed figure, adapted from another write-up of this assignment; I did not bother removing the watermark.)
(Roughly 1 hour on a GPU; a CPU works too, taking perhaps four to five hours.)
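The ensemble step itself is just an element-wise mean of the folds' test-set predictions; a toy NumPy sketch with invented probabilities:

```python
import numpy as np

# One prediction vector per fold model over the same three test examples.
fold_preds = [np.array([0.9, 0.2, 0.6]),
              np.array([0.8, 0.3, 0.5]),
              np.array([0.7, 0.1, 0.7]),
              np.array([0.9, 0.2, 0.4]),
              np.array([0.7, 0.2, 0.8])]

# Final submission score: element-wise mean across the five folds.
final_pred = np.mean(fold_preds, axis=0)
print(final_pred)  # [0.8 0.2 0.6]
```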
import keras
from keras.utils import to_categorical
from keras.layers import *
from keras.callbacks import *
from keras.models import Model
import keras.backend as K
from keras.optimizers import Adam
import codecs
import gc
import numpy as np
import pandas as pd
import time
import os
from keras.utils.training_utils import multi_gpu_model
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from sklearn.model_selection import KFold
from keras_bert import load_trained_model_from_checkpoint, Tokenizer
from keras_self_attention import SeqSelfAttention
from sklearn.metrics import roc_auc_score
# offline 0.9552568091358987, batch = 16, cross-entropy, lr 1e-5; online 0.96668
# offline 0.9603767202619631, batch = 16, f1_loss on top of the previous step, BERT frozen, lr 1e-7; online 0.97010
class OurTokenizer(Tokenizer):
    def _tokenize(self, text):
        R = []
        for c in text:
            if c in self._token_dict:
                R.append(c)
            elif self._is_space(c):
                R.append('[unused1]')  # map whitespace to the untrained [unused1] token
            else:
                R.append('[UNK]')  # everything else becomes [UNK]
        return R

def f1_loss(y_true, y_pred):
    # y_true: ground-truth label, 0 or 1
    # y_pred: predicted probability of the positive class
    # soft F1; the epsilon belongs inside the denominator to avoid division by zero
    loss = 2 * tf.reduce_sum(y_true * y_pred) / (tf.reduce_sum(y_true + y_pred) + K.epsilon())
    return -loss
def seq_padding(X, padding=0):
    L = [len(x) for x in X]
    ML = max(L)
    return np.array([
        np.concatenate([x, [padding] * (ML - len(x))]) if len(x) < ML else x for x in X
    ])
class data_generator:
    def __init__(self, data, batch_size=8, shuffle=True):
        self.data = data
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.steps = len(self.data) // self.batch_size
        if len(self.data) % self.batch_size != 0:
            self.steps += 1

    def __len__(self):
        return self.steps

    def __iter__(self):
        while True:
            idxs = list(range(len(self.data)))
            if self.shuffle:
                np.random.shuffle(idxs)
            X1, X2, Y = [], [], []
            for i in idxs:
                d = self.data[i]
                text = d[0][:maxlen]
                # e.g. indices, segments = tokenizer.encode(first='unaffable', second='鋼', max_len=10)
                x1, x2 = tokenizer.encode(first=text)
                y = np.float32(d[1])
                X1.append(x1)
                X2.append(x2)
                Y.append([y])
                if len(X1) == self.batch_size or i == idxs[-1]:
                    X1 = seq_padding(X1)
                    X2 = seq_padding(X2)
                    Y = seq_padding(Y)
                    yield [X1, X2], Y[:, 0]
                    X1, X2, Y = [], [], []
def build_bert(nclass, selfloss, lr, is_train):
    bert_model = load_trained_model_from_checkpoint(config_path, checkpoint_path, seq_len=None)
    for l in bert_model.layers:
        l.trainable = is_train
    x1_in = Input(shape=(None,))
    x2_in = Input(shape=(None,))
    x = bert_model([x1_in, x2_in])
    x = Lambda(lambda x: x[:, :])(x)  # last hidden layer, all positions
    avg_pool_3 = GlobalAveragePooling1D()(x)
    max_pool_3 = GlobalMaxPooling1D()(x)
    # docs: https://www.cnpython.com/pypi/keras-self-attention
    # source: https://github.com/CyberZHG/keras-self-attention/blob/master/keras_self_attention/seq_self_attention.py
    attention_3 = SeqSelfAttention(attention_activation='softmax')(x)
    attention_3 = Lambda(lambda x: x[:, 0])(attention_3)  # keep the [CLS] position
    x = keras.layers.concatenate([avg_pool_3, max_pool_3, attention_3], name="fc")
    p = Dense(nclass, activation='sigmoid')(x)
    model = Model([x1_in, x2_in], p)
    model.compile(loss=selfloss,
                  optimizer=Adam(lr),
                  metrics=['acc'])
    print(model.summary())
    return model
def run_cv(nfold, data, data_test):
    kf = KFold(n_splits=nfold, shuffle=True, random_state=2020).split(data)
    train_model_pred = np.zeros((len(data), 1))
    test_model_pred = np.zeros((len(data_test), 1))
    lr = 1e-7  # 1e-5 for Step 1
    # alternatives: 'binary_crossentropy' (Step 1), f1_loss (Step 2)
    selfloss = f1_loss
    is_train = False  # True for Step 1, False for Step 2
    for i, (train_fold, test_fold) in enumerate(kf):
        print('***************%d-th****************' % i)
        t = time.time()
        X_train, X_valid = data[train_fold, :], data[test_fold, :]
        model = build_bert(1, selfloss, lr, is_train)
        early_stopping = EarlyStopping(monitor='val_acc', patience=3)
        plateau = ReduceLROnPlateau(monitor="val_acc", verbose=1, mode='max', factor=0.5, patience=2)
        checkpoint = ModelCheckpoint('/home/comment_classify/expriments/' + str(i) + '_2.hdf5', monitor='val_acc',
                                     verbose=2, save_best_only=True, mode='max', save_weights_only=False)
        batch_size = 16
        train_D = data_generator(X_train, batch_size=batch_size, shuffle=True)
        valid_D = data_generator(X_valid, batch_size=batch_size, shuffle=False)
        test_D = data_generator(data_test, batch_size=batch_size, shuffle=False)
        # Step 2 resumes from the Step 1 checkpoint
        model.load_weights('/home/comment_classify/expriments/' + str(i) + '.hdf5')
        model.fit_generator(
            train_D.__iter__(),
            steps_per_epoch=len(train_D),
            epochs=8,
            validation_data=valid_D.__iter__(),
            validation_steps=len(valid_D),
            callbacks=[early_stopping, plateau, checkpoint],
        )
        train_model_pred[test_fold] = model.predict_generator(valid_D.__iter__(), steps=len(valid_D), verbose=1)
        test_model_pred += model.predict_generator(test_D.__iter__(), steps=len(test_D), verbose=1)
        del model
        gc.collect()
        K.clear_session()
        print('time:', time.time() - t)
    return train_model_pred, test_model_pred
if __name__ == '__main__':
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.8  # fixed cap on GPU memory
    config.gpu_options.allow_growth = True  # grow allocation on demand
    set_session(tf.Session(config=config))
    t = time.time()
    maxlen = 20  # the longest text in the dataset has 19 characters

    # chinese_L-12_H-768_A-12 is Google's official pre-trained Chinese BERT (Chinese-Base)
    config_path = '/home/chinese_L-12_H-768_A-12/bert_config.json'
    checkpoint_path = '/home/chinese_L-12_H-768_A-12/bert_model.ckpt'
    dict_path = '/home/chinese_L-12_H-768_A-12/vocab.txt'

    token_dict = {}
    with codecs.open(dict_path, 'r', 'utf8') as reader:
        for line in reader:
            token = line.strip()
            token_dict[token] = len(token_dict)
    tokenizer = OurTokenizer(token_dict)

    data_dir = '/home/comment_classify/'
    train_df = pd.read_csv(os.path.join(data_dir, 'train.csv'))
    test_df = pd.read_csv(os.path.join(data_dir, 'test.csv'))
    print(len(train_df), len(test_df))

    DATA_LIST = []
    for data_row in train_df.iloc[:].itertuples():
        DATA_LIST.append((data_row.content, data_row.label))
    DATA_LIST = np.array(DATA_LIST)

    DATA_LIST_TEST = []
    for data_row in test_df.iloc[:].itertuples():
        DATA_LIST_TEST.append((data_row.content, 0))  # dummy label for the test set
    DATA_LIST_TEST = np.array(DATA_LIST_TEST)

    n_cv = 5
    train_model_pred, test_model_pred = run_cv(n_cv, DATA_LIST, DATA_LIST_TEST)
    train_df['Prediction'] = train_model_pred
    test_df['Prediction'] = test_model_pred / n_cv

    train_df.to_csv(os.path.join(data_dir, 'train_submit2.csv'), index=False)
    test_df['ID'] = test_df.index
    test_df[['ID', 'Prediction']].to_csv(os.path.join(data_dir, 'submit2.csv'), index=False)

    auc = roc_auc_score(np.array(train_df['label']), np.array(train_df['Prediction']))
    print('auc', auc)
    print('time is ', time.time() - t)  # 2853s
That is all for this post; if anything is lacking, please do point it out. Also attached is the write-up the competition officially required: the recommended-review display document (推薦評論展示文檔).