[Paper Reading] SimCLR: Google's New Self-Supervised Method (Feb 2020)

Title: A Simple Framework for Contrastive Learning of Visual Representations

Authors: Ting Chen, Geoffrey Hinton, et al. (Google Research)

Reference: A major work from Hinton's group: best unsupervised-learning performance on ImageNet improved by 7% in one step, rivaling supervised learning

 

Network Architecture

 

Data augmentation

1. Composition of data augmentation operations is crucial for learning good representations.

2. No single transformation suffices to learn good representations.

3. It is critical to compose cropping with color distortion (see the sketch after this list).

4. Data augmentation that does not yield accuracy benefits for supervised learning can still help considerably with contrastive learning.
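
A minimal sketch of point 3: composing random cropping with color distortion into one stochastic pipeline using torchvision. The jitter strength s=1.0, the blur kernel size, and the 224×224 input size are illustrative assumptions, not necessarily the paper's exact settings.

```python
# Sketch of a SimCLR-style augmentation pipeline (torchvision); the jitter
# strength s, blur kernel, and input size are assumptions for illustration.
import torchvision.transforms as T

def simclr_augment(size=224, s=1.0):
    # Color distortion: jitter applied with prob. 0.8, grayscale with prob. 0.2.
    color_jitter = T.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)
    return T.Compose([
        T.RandomResizedCrop(size),           # random crop, then resize back to `size`
        T.RandomHorizontalFlip(),
        T.RandomApply([color_jitter], p=0.8),
        T.RandomGrayscale(p=0.2),
        T.GaussianBlur(kernel_size=23),      # assumed odd kernel ~10% of image size
        T.ToTensor(),
    ])

# Applying the same stochastic pipeline twice to one image yields the two
# correlated views that form a positive pair:
#   augment = simclr_augment()
#   view_1, view_2 = augment(img), augment(img)
```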

 

Architectures for Encoder and Head

1. Unsupervised contrastive learning benefits (more) from bigger models.

2. g(·): a nonlinear projection head is better than a linear projection (see the sketch after this list).

3. Contrastive learning benefits (more) from larger batch sizes and longer training.
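
To make points 1 and 2 concrete, below is a minimal sketch of an encoder f(·) followed by a nonlinear projection head g(·), with the NT-Xent contrastive loss computed on the projected, L2-normalized embeddings. The ResNet-50 backbone, 128-d projection dimension, and temperature 0.5 are assumptions chosen for illustration, not a definitive reproduction of the paper's configuration.

```python
# Sketch of encoder f(.) + nonlinear projection head g(.) and the NT-Xent loss.
# Backbone, projection size, and temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SimCLRModel(nn.Module):
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = torchvision.models.resnet50()
        feat_dim = backbone.fc.in_features      # 2048 for ResNet-50
        backbone.fc = nn.Identity()             # f(.): outputs representation h
        self.encoder = backbone
        self.projector = nn.Sequential(         # g(.): nonlinear projection head
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        h = self.encoder(x)                     # h is used for downstream tasks
        z = self.projector(h)                   # z is used only for the loss
        return F.normalize(z, dim=1)

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for a batch of N positive pairs (2N augmented views)."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)              # (2N, d), already L2-normalized
    sim = z @ z.t() / temperature               # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))           # exclude self-similarity
    # For view i, the positive is view i+N (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```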

 

Results
