What If the BERT Family of Papers Were a Commit History

I recently came across a fun idea on Twitter: imagine a world where research papers are published on GitHub, and every follow-up paper is committed as a diff against the work it builds on. Information overload has long been a problem in AI and machine learning, with a flood of new papers every month, so viewing them as a commit history might be a refreshing way to see at a glance what each paper actually changed.
So let's ride the coattails of the field's biggest star, BERT, and see what this would look like applied to the BERT family of papers.


commit arXiv:1810.04805
Author: Devlin et al.
Date: Thu Oct 11 00:50:01 2018 +0000
Initial Commit: BERT
-Transformer Decoder
+Bidirectional Transformer Encoder
+Masked Language Modeling
+Next Sentence Prediction
+WordPiece 30K
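
Masked language modeling is the heart of this commit: about 15% of input positions are selected, and the model must reconstruct them from bidirectional context. A minimal PyTorch sketch of the standard 80/10/10 masking rule (function and variable names are mine, not from the paper):

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """BERT-style masking: choose ~15% of positions; of those,
    80% become [MASK], 10% a random token, 10% stay unchanged."""
    labels = input_ids.clone()
    chosen = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~chosen] = -100  # only masked positions contribute to the loss

    # 80% of chosen positions -> [MASK]
    masked = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & chosen
    input_ids[masked] = mask_token_id

    # half of the remaining 20% -> random token; the rest stay as-is
    randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & chosen & ~masked
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]
    return input_ids, labels
```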

commit arXiv:1901.07291
Author: Lample et al.
Date: Tue Jan 22 10:46:37 2019 +0000
Cross-lingual Language Model Pretraining
+Translation Language Modeling (TLM)
+Causal Language Modeling (CLM)
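
Translation language modeling is essentially MLM applied to a concatenated parallel sentence pair, so the model can peek at the other language when filling in a blank. A tiny sketch reusing the mask_tokens function from the BERT commit above (the token ids here are arbitrary toy values):

```python
import torch

# Toy parallel pair; real inputs would come from a tokenizer.
en_ids = torch.tensor([101, 2023, 2003, 1037, 3231, 102])
fr_ids = torch.tensor([101, 5403, 2003, 9999, 102])

# TLM: concatenate the pair and mask across both sides, so predicting a
# masked English token can draw on the aligned French context (and vice versa).
pair_ids = torch.cat([en_ids, fr_ids], dim=-1)
masked_ids, labels = mask_tokens(pair_ids, mask_token_id=103, vocab_size=30000)
```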

commit arXiv:1906.08237
Author: Yang et al.
Date: Wed Jun 19 17:35:48 2019 +0000
XLNet: Generalized Autoregressive Pretraining for Language Understanding
-Masked Language Modeling
-BERT Transformer
+Permutation Language Modeling
+Transformer-XL
+Two-stream self-attention
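
Permutation language modeling keeps an autoregressive factorization but samples a random order over positions, realized as an attention mask: each token may only attend to tokens that come earlier in the sampled order. A sketch of that mask (the actual model layers two-stream attention on top, which is omitted here):

```python
import torch

def permutation_attention_mask(seq_len):
    """attn[i, j] is True when token j comes strictly earlier than token i
    in a randomly sampled factorization order."""
    order = torch.randperm(seq_len)        # a random factorization order
    rank = torch.empty(seq_len, dtype=torch.long)
    rank[order] = torch.arange(seq_len)    # rank[t] = position of token t in the order
    return rank.unsqueeze(1) > rank.unsqueeze(0)
```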

commit arXiv:1907.10529
Author: Joshi et al.
Date: Wed Jul 24 15:43:40 2019 +0000
SpanBERT: Improving Pre-training by Representing and Predicting Spans
-Random Token Masking
-Next Sentence Prediction
-Bi-sequence Training
+Continuous Span Masking
+Span-Boundary Objective (SBO)
+Single-Sequence Training
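
Rather than masking scattered single tokens, SpanBERT masks contiguous spans with lengths drawn from a clipped geometric distribution, up to a roughly 15% budget. A NumPy sketch with the parameter values reported in the paper (p = 0.2, spans clipped at 10):

```python
import numpy as np

def sample_span_mask(seq_len, mask_ratio=0.15, p=0.2, max_span=10):
    """Mask contiguous spans, lengths ~ Geometric(p) clipped to max_span,
    until roughly mask_ratio of positions are covered."""
    masked = np.zeros(seq_len, dtype=bool)
    budget = int(seq_len * mask_ratio)
    while masked.sum() < budget:
        span_len = min(np.random.geometric(p), max_span)
        start = np.random.randint(0, max(1, seq_len - span_len))
        masked[start:start + span_len] = True
    return masked
```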

commit arXiv:1907.11692
Author: Liu et al.
Date: Fri Jul 26 17:48:29 2019 +0000
RoBERTa: A Robustly Optimized BERT Pretraining Approach
-Next Sentence Prediction
-Static Masking of Tokens
+Dynamic Masking of Tokens
+Byte Pair Encoding (BPE) 50K
+Large batch size
+CC-NEWS dataset
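
The static-vs-dynamic distinction is purely about when the mask is sampled: BERT's original pipeline masked each sequence once at preprocessing time, while RoBERTa re-samples the mask every time a sequence is fed to the model. A sketch of the difference, reusing mask_tokens from the BERT commit (dataset, MASK_ID, VOCAB, and num_epochs are placeholders):

```python
# Placeholders: `dataset` is an iterable of token-id tensors; MASK_ID and
# VOCAB match the tokenizer; mask_tokens() is the sketch from the BERT commit.

# Static masking (original BERT pipeline): sample the mask once at
# preprocessing time and reuse it every epoch.
static_data = [mask_tokens(ids.clone(), MASK_ID, VOCAB) for ids in dataset]

# Dynamic masking (RoBERTa): re-sample the mask each time a sequence is
# drawn, so the model sees different masked positions across epochs.
for epoch in range(num_epochs):
    for ids in dataset:
        masked_ids, labels = mask_tokens(ids.clone(), MASK_ID, VOCAB)
        # ... forward/backward pass ...
```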

commit arXiv:1908.10084
Author: Reimers et al.
Date: Tue Aug 27 08:50:17 2019 +0000
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
+Siamese Network Structure
+Finetuning on SNLI and MNLI
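
The siamese setup runs the same BERT encoder over both sentences and pools the token embeddings into one fixed-size vector per sentence, so semantic similarity reduces to a cheap cosine between vectors. A minimal usage sketch with the authors' sentence-transformers library (the checkpoint name is one of its published models, picked for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any SBERT checkpoint works
embeddings = model.encode(["The cat sits on the mat.",
                           "A feline rests on a rug."])
print(util.cos_sim(embeddings[0], embeddings[1]))  # high cosine similarity
```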

commit arXiv:1909.11942
Author: Lan et al.
Date: Thu Sep 26 07:06:13 2019 +0000
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
-Next Sentence Prediction
+Sentence Order Prediction
+Cross-layer Parameter Sharing
+Factorized Embeddings
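
Factorized embeddings decouple the vocabulary embedding size E from the hidden size H: instead of one V×H lookup table, ALBERT stores a V×E table plus an E×H projection. The parameter arithmetic, with sizes roughly matching ALBERT-xxlarge:

```python
V, H, E = 30000, 4096, 128        # vocab, hidden, embedding sizes
bert_style   = V * H              # 122,880,000 embedding parameters
albert_style = V * E + E * H      #   4,364,288: about 28x fewer
print(bert_style, albert_style)
```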

commit arXiv:1910.01108
Author: Sanh et al.
Date: Wed Oct 2 17:56:28 2019 +0000
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
-Next Sentence Prediction
-Token-Type Embeddings
-[CLS] pooling
+Knowledge Distillation
+Cosine Embedding Loss
+Dynamic Masking
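
The distillation objective trains the small student to match the teacher's softened output distribution via a KL term at temperature T, plus a cosine loss aligning their hidden states (the hard MLM loss is kept as well). A PyTorch sketch; the relative weights between the terms are omitted:

```python
import torch
import torch.nn.functional as F

def distil_loss(student_logits, teacher_logits, student_h, teacher_h, T=2.0):
    """Soft-target KL at temperature T plus a cosine embedding loss
    pulling student hidden states toward the teacher's."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    align = torch.ones(student_h.size(0))  # target = +1: "make them similar"
    cosine = F.cosine_embedding_loss(student_h, teacher_h, align)
    return soft + cosine
```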

commit arXiv:1911.03894
Author: Martin et al.
Date: Sun Nov 10 10:46:37 2019 +0000
CamemBERT: a Tasty French Language Model
-BERT
-English
+RoBERTa
+French OSCAR dataset (138 GB)
+Whole-Word Masking (WWM)
+SentencePiece Tokenizer
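
Whole-word masking groups subword pieces back into words before sampling, so a chosen word is always masked in full. A sketch for a SentencePiece token sequence, where a leading '▁' marks the start of a new word (the function name and the 15% ratio are illustrative defaults):

```python
import random

def whole_word_mask(pieces, mask_ratio=0.15, mask_piece="<mask>"):
    """Group SentencePiece tokens into words, then mask whole words together."""
    words, current = [], []
    for i, piece in enumerate(pieces):
        if piece.startswith("▁") and current:  # '▁' starts a new word
            words.append(current)
            current = []
        current.append(i)
    if current:
        words.append(current)
    for word in random.sample(words, max(1, int(len(words) * mask_ratio))):
        for i in word:
            pieces[i] = mask_piece
    return pieces
```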

commit arXiv:1912.05372
Author: Le et al.
Date: Wed Dec 11 14:59:32 2019 +0000
FlauBERT: Unsupervised Language Model Pre-training for French
-BERT
-English
+RoBERTa
+fastBPE
+Stochastic Depth
+French dataset (71 GB)
+FLUE (French Language Understanding Evaluation) benchmark
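
Stochastic depth, borrowed from computer vision, randomly skips whole layers during training as a regularizer; at inference time every layer runs. A sketch of the idea in a transformer forward pass (p_drop and the plain loop are illustrative, not FlauBERT's exact recipe):

```python
import torch

def forward_with_stochastic_depth(x, layers, p_drop=0.1, training=True):
    """Skip each residual layer with probability p_drop during training."""
    for layer in layers:
        if training and torch.rand(()).item() < p_drop:
            continue  # identity shortcut: this layer is dropped for this batch
        x = layer(x)
    return x
```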
