Paper: https://arxiv.org/pdf/1910.11476.pdf
1. Paper Objective
This paper targets nested NER. Most prior work handles flat (non-nested) NER, and those models run into trouble once entities are nested inside one another.
2. Paper Tricks
Consider the two examples shown in the figure below.
The paper's creative idea is to recast nested NER as MRC (Machine Reading Comprehension).
As the figure above shows, each entity type is paired with a corresponding question, which tells the model explicitly which entities we want to extract. These questions are hand-designed.
(1) Model input
Taking BERT as an example, the concrete model input is:
[CLS] question [SEP] text [SEP]
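As a minimal sketch of this input layout (the query wording and whitespace tokenization here are illustrative assumptions, not the authors' exact preprocessing):

```python
# Build the MRC-style input "[CLS] question [SEP] text [SEP]" by hand.
query = "Find organizations, including companies and agencies."
context = "Google was founded in 1998."
tokens = ["[CLS]"] + query.split() + ["[SEP]"] + context.split() + ["[SEP]"]

# token_type_ids: 0 for the question segment (incl. [CLS] and the first [SEP]),
# 1 for the context segment, following the standard BERT sentence-pair convention
first_sep = tokens.index("[SEP]")
token_type_ids = [0] * (first_sep + 1) + [1] * (len(tokens) - first_sep - 1)
print(tokens)
print(token_type_ids)
```

In practice a BERT tokenizer would produce subword tokens instead of whitespace splits, but the segment layout is the same.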
(2) Loss function
The model's loss has three parts:
- Loss for the entity start index
sequence_output, pooled_output, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
sequence_heatmap = sequence_output  # batch x seq_len x hidden
start_logits = self.start_outputs(sequence_heatmap)  # batch x seq_len x 2
# each token gets a binary start / not-start label; loss_fct is a token-level cross-entropy loss
start_loss = loss_fct(start_logits.view(-1, 2), start_positions.view(-1))
- Loss for the entity end index (reusing the encoder output computed above, rather than running BERT again)
end_logits = self.end_outputs(sequence_heatmap)  # batch x seq_len x 2
end_loss = loss_fct(end_logits.view(-1, 2), end_positions.view(-1))
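A toy run of the start/end losses with stand-in tensors (the shapes and `nn.CrossEntropyLoss` as `loss_fct` are my assumptions based on the `.view(-1, 2)` reshaping above, not code from the repo):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
batch, seq_len, hidden = 2, 6, 8

# stand-ins for the BERT encoder output and the two linear heads
sequence_output = torch.randn(batch, seq_len, hidden)
start_outputs = nn.Linear(hidden, 2)  # token is / is not an entity start
end_outputs = nn.Linear(hidden, 2)    # token is / is not an entity end

start_logits = start_outputs(sequence_output)  # batch x seq_len x 2
end_logits = end_outputs(sequence_output)      # batch x seq_len x 2

# gold labels: 1 at start/end positions, 0 elsewhere
start_positions = torch.zeros(batch, seq_len, dtype=torch.long)
end_positions = torch.zeros(batch, seq_len, dtype=torch.long)
start_positions[0, 1] = 1
end_positions[0, 3] = 1

loss_fct = nn.CrossEntropyLoss()
start_loss = loss_fct(start_logits.view(-1, 2), start_positions.view(-1))
end_loss = loss_fct(end_logits.view(-1, 2), end_positions.view(-1))
print(start_loss.item(), end_loss.item())
```

Flattening to (batch*seq_len) x 2 turns the per-token binary decision into an ordinary classification problem.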
- Loss for start-end span matching (again reusing the same encoder output)
batch_size, seq_len, hid_size = sequence_heatmap.size()
start_extend = sequence_heatmap.unsqueeze(2).expand(-1, -1, seq_len, -1)  # batch x seq_len x seq_len x hidden
end_extend = sequence_heatmap.unsqueeze(1).expand(-1, seq_len, -1, -1)  # batch x seq_len x seq_len x hidden
# cell (i, j) concatenates token i's vector (candidate start) with token j's vector (candidate end)
span_matrix = torch.cat([start_extend, end_extend], 3)  # batch x seq_len x seq_len x 2*hidden
span_logits = self.span_embedding(span_matrix)  # batch x seq_len x seq_len x 1
span_logits = torch.squeeze(span_logits, -1)  # batch x seq_len x seq_len; squeeze only the last dim so batch_size=1 survives
span_loss_fct = nn.BCEWithLogitsLoss()
span_loss = span_loss_fct(span_logits.view(batch_size, -1), span_positions.view(batch_size, -1).float())
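To make the tensor shapes concrete, here is a toy run with stand-in tensors (a plain `nn.Linear` replaces the repo's `span_embedding` head; shapes are assumptions):

```python
import torch
import torch.nn as nn

batch, seq_len, hidden = 2, 4, 8
sequence_heatmap = torch.randn(batch, seq_len, hidden)

# cell (i, j) pairs token i's vector with token j's vector
start_extend = sequence_heatmap.unsqueeze(2).expand(-1, -1, seq_len, -1)  # b x L x L x h
end_extend = sequence_heatmap.unsqueeze(1).expand(-1, seq_len, -1, -1)    # b x L x L x h
span_matrix = torch.cat([start_extend, end_extend], 3)                    # b x L x L x 2h

span_embedding = nn.Linear(2 * hidden, 1)  # stand-in for the span scoring head
span_logits = span_embedding(span_matrix).squeeze(-1)  # b x L x L
print(span_matrix.shape, span_logits.shape)
```

So `span_logits[b, i, j]` is a score for "tokens i..j form one entity", which is why the loss over it is a binary `BCEWithLogitsLoss`.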
I don't fully understand this part. My guess: after scoring each token as a possible start or end, the span matrix pairs every (start, end) candidate, and the binary classifier decides which pairs actually bound a real entity.
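That reading can be sketched as a small decoding routine (my own interpretation, not the paper's exact decoding rule): pair every predicted start with every predicted end at or after it, and keep a pair only if the span classifier accepts it; nested spans can then coexist.

```python
def extract_spans(is_start, is_end, span_match):
    """Pair predicted starts with predicted ends (end >= start) and keep
    only the pairs the span classifier accepts."""
    spans = []
    for s, s_on in enumerate(is_start):
        if not s_on:
            continue
        for e in range(s, len(is_end)):
            if is_end[e] and span_match[s][e]:
                spans.append((s, e))
    return spans

# toy predictions for a 5-token context: starts at 0 and 1, ends at 2 and 4
is_start = [1, 1, 0, 0, 0]
is_end = [0, 0, 1, 0, 1]
span_match = [[0] * 5 for _ in range(5)]
span_match[0][4] = 1  # outer entity, tokens 0..4
span_match[1][2] = 1  # nested entity, tokens 1..2
print(extract_spans(is_start, is_end, span_match))  # → [(0, 4), (1, 2)]
```

Note how the nested span (1, 2) survives alongside the enclosing span (0, 4), which a flat tagging scheme could not represent.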