[Paper Reading Series] Self-supervised learning: rotating, M&M tuning, Jigsaw++

How to learn from unlabeled volume data: Self-Supervised 3D Context Feature Learning

Author: Maximilian Blendowski (University of Lübeck, Germany)

Proposes a 2.5D self-supervised method that predicts the relative displacement between two points taken from different slices of a volume. Two prediction networks are used: direct numerical regression and a heatmap-based approach. Mainly applied to chest scan data.
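A minimal sketch of the slice-pair sampling behind this pretext task, assuming a plain NumPy volume; the function name, patch size, offset range, and slice gap are illustrative choices, not the paper's exact protocol:

```python
# Sketch: sample two in-plane patches from neighbouring slices of a 3D volume;
# the self-supervised target is the (dy, dx) displacement between them.
# All names, sizes, and ranges below are assumptions for illustration.
import numpy as np

def sample_displacement_pair(volume, patch=32, max_offset=8, slice_gap=2, rng=None):
    """Return (patch_a, patch_b, target) where target = (dy, dx) offset of patch_b."""
    rng = rng or np.random.default_rng()
    D, H, W = volume.shape
    z = rng.integers(0, D - slice_gap)
    y = rng.integers(max_offset, H - patch - max_offset)
    x = rng.integers(max_offset, W - patch - max_offset)
    dy, dx = rng.integers(-max_offset, max_offset + 1, size=2)

    patch_a = volume[z, y:y + patch, x:x + patch]
    patch_b = volume[z + slice_gap, y + dy:y + dy + patch, x + dx:x + dx + patch]
    return patch_a, patch_b, np.array([dy, dx], dtype=np.float32)

# Usage on a random stand-in volume; a network would then regress the target
# directly (numerical head) or predict it as a peak in a heatmap (heatmap head).
vol = np.random.rand(64, 128, 128).astype(np.float32)
pa, pb, target = sample_displacement_pair(vol)
print(pa.shape, pb.shape, target)  # (32, 32) (32, 32) [dy dx]
```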

 

Scaling and Benchmarking Self-Supervised Visual Representation Learning

Author: Priya Goyal et al. (Facebook)

Presents several tricks/observations for self-supervised learning:

  1. Increasing the size of the pre-training data improves transfer-learning performance for both the Jigsaw and Colorization methods.

  2. Larger pre-trained models perform better (ResNet-50 > AlexNet).

 

Self-supervised Spatiotemporal Feature Learning by Video Geometric Transformations

  • Data: video
  • Method (a minimal sketch follows this list):
    • A set of pre-designed geometric transformations (e.g. rotations of 0°, 90°, 180°, and 270°) is applied to each video
    • The network is trained to predict which transformation (e.g. which rotation) was applied
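A minimal sketch of generating one rotation-prediction sample for a video clip, assuming clips are NumPy arrays of shape (T, H, W, C); the function name and clip size are illustrative assumptions:

```python
# Sketch: rotate every frame of a clip by a random multiple of 90 degrees and
# use the rotation index as the self-supervised classification label.
import numpy as np

def make_rotation_sample(clip, rng=None):
    """Return (rotated_clip, label), label in {0,1,2,3} for 0°, 90°, 180°, 270°."""
    rng = rng or np.random.default_rng()
    label = int(rng.integers(0, 4))
    # np.rot90 with k=label rotates the spatial axes (1, 2) of the (T, H, W, C) clip.
    rotated = np.rot90(clip, k=label, axes=(1, 2)).copy()
    return rotated, label

clip = np.random.rand(16, 112, 112, 3).astype(np.float32)  # T, H, W, C
rot_clip, label = make_rotation_sample(clip)
print(rot_clip.shape, label)
```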

 

Mix-and-Match Tuning for Self-Supervised Semantic Segmentation (AAAI 2018)

Author: Xiaohang Zhan (CUHK)

  • Three steps: 1) pre-training by colorization; 2) M&M tuning; 3) target segmentation task
  • M&M tuning: 1) Sample patches from the images, discard heavily overlapping ones, assign each patch a unique class label (e.g. car, person) from the ground-truth annotations, and mix all the patches together. 2) Fine-tune the network with a triplet loss (a minimal sketch follows this list).
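A minimal sketch of the triplet-loss fine-tuning step, using a placeholder patch encoder in PyTorch; the network, batch construction, and margin are assumptions for illustration, not the paper's architecture:

```python
# Sketch: patches sharing a class label are pulled together in embedding space,
# patches from different classes are pushed apart, via a triplet margin loss.
import torch
import torch.nn as nn

embed = nn.Sequential(            # stand-in patch encoder (colorization-pretrained in the paper)
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))

triplet_loss = nn.TripletMarginLoss(margin=1.0)

# anchor/positive share a class label (e.g. "car"), negative has a different
# label (e.g. "person"); random tensors stand in for the sampled patches here.
anchor   = torch.randn(8, 3, 64, 64)
positive = torch.randn(8, 3, 64, 64)
negative = torch.randn(8, 3, 64, 64)

loss = triplet_loss(embed(anchor), embed(positive), embed(negative))
loss.backward()                    # one fine-tuning step on the mixed patch pool
print(loss.item())
```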

 

Boosting Self-Supervised Learning via Knowledge Transfer

  • Overall framework diagram
  • The pretext task in (a) is the Jigsaw++ task (a small sketch of building a Jigsaw++ sample follows this list)
    • In Jigsaw++, a random number of tiles in the grid (up to 2) is replaced with (occluding) tiles from another random image
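A minimal sketch of building one Jigsaw++ training example, assuming a 3x3 grid and NumPy images; the grid size, tile handling, and use of the full permutation (rather than a fixed permutation set) are simplifying assumptions:

```python
# Sketch: split an image into a 3x3 grid, replace up to 2 tiles with tiles from
# another image, shuffle the tiles; the permutation is the pretext label.
import numpy as np

def jigsaw_pp_sample(img, other, grid=3, max_occluded=2, rng=None):
    rng = rng or np.random.default_rng()
    H, W, C = img.shape
    th, tw = H // grid, W // grid
    tiles = [img[i*th:(i+1)*th, j*tw:(j+1)*tw] for i in range(grid) for j in range(grid)]
    occluders = [other[i*th:(i+1)*th, j*tw:(j+1)*tw] for i in range(grid) for j in range(grid)]

    # Replace a random number of tiles (0..max_occluded) with "occluding" tiles.
    n_occ = int(rng.integers(0, max_occluded + 1))
    for idx in rng.choice(len(tiles), size=n_occ, replace=False):
        tiles[idx] = occluders[int(rng.integers(0, len(occluders)))]

    # Shuffle the tiles; the network must recover the permutation.
    perm = rng.permutation(len(tiles))
    shuffled = [tiles[p] for p in perm]
    return shuffled, perm

img = np.random.rand(99, 99, 3)
other = np.random.rand(99, 99, 3)
tiles, perm = jigsaw_pp_sample(img, other)
print(len(tiles), perm)
```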

 

DeepCluster

Uses clustering to assign each sample a pseudo-label (one label per cluster) and trains the network on these pseudo-labels as pre-training. A minimal sketch of the loop is given below.
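A minimal sketch of the alternating cluster-then-classify loop, using a placeholder encoder, k-means from scikit-learn, and a fixed cluster count; all of these are illustrative choices rather than the paper's actual setup:

```python
# Sketch: (1) cluster current features into pseudo-labels, (2) train the
# encoder + classifier to predict those labels, then repeat.
import numpy as np
from sklearn.cluster import KMeans
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 64))  # stand-in for a CNN
classifier = nn.Linear(64, 10)                                     # one output per cluster
opt = torch.optim.SGD(list(encoder.parameters()) + list(classifier.parameters()), lr=0.01)
ce = nn.CrossEntropyLoss()

images = torch.randn(256, 3, 32, 32)            # stand-in for unlabeled data

for epoch in range(2):
    # 1) compute features and cluster them into pseudo-labels
    with torch.no_grad():
        feats = encoder(images).numpy()
    pseudo = KMeans(n_clusters=10, n_init=10).fit_predict(feats)
    labels = torch.as_tensor(pseudo, dtype=torch.long)

    # 2) train encoder + classifier on the pseudo-labels
    opt.zero_grad()
    loss = ce(classifier(encoder(images)), labels)
    loss.backward()
    opt.step()
    print(epoch, loss.item())
```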
