[Paper Reading Series] Self-supervised learning: rotation, M&M tuning, Jigsaw++

How to learn from unlabeled volume data: Self-Supervised 3D Context Feature Learning

Author: Maximilian Blendowski (University of Lübeck, Germany)

Proposes a 2.5D self-supervised method that predicts the relative displacement between two points on different slices. Two prediction networks are used: direct numerical regression and a heatmap-based approach. Mainly applied to chest imaging data.
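The heatmap variant of the displacement head can be illustrated with a toy sketch: instead of regressing the displacement directly, the network emits a heatmap, and a sub-pixel coordinate is read off with a soft-argmax. This is a minimal NumPy sketch of that read-out step only (the name `soft_argmax_2d` is mine, not from the paper):

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Read a sub-pixel (y, x) coordinate out of a 2D heatmap
    by taking the probability-weighted average of grid positions."""
    h, w = heatmap.shape
    probs = np.exp(heatmap - heatmap.max())   # softmax over all cells
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((probs * ys).sum()), float((probs * xs).sum())
```

Unlike a hard argmax, this is differentiable, so the heatmap head can be trained end-to-end against the ground-truth displacement.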

 

Scaling and Benchmarking Self-Supervised Visual Representation Learning

Author: Priya Goyal et al. (Facebook)

Reports several empirical findings for self-supervised pre-training:

  1. Increasing the size of the pre-training data improves transfer-learning performance for both the Jigsaw and Colorization methods.

  2. Larger pre-training models perform better (ResNet-50 > AlexNet).

 

Self-supervised Spatiotemporal Feature Learning by Video Geometric Transformations

  • Data type: video
  • Method:
    • a set of pre-designed geometric transformations (e.g. rotations by 0°, 90°, 180°, and 270°) is applied to each video
    • the network is trained to predict which transformation (e.g. which rotation) was applied
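The label-generation step above can be sketched in a few lines: each clip yields four rotated copies, and the rotation index serves as the classification target. A minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def rotate_clip(clip, k):
    """Rotate every frame of a (T, H, W) clip by k * 90 degrees."""
    return np.stack([np.rot90(frame, k) for frame in clip])

def make_rotation_batch(clip):
    """Return the four rotated copies of a clip and their labels 0..3,
    which a network would then be trained to classify."""
    rotations = [rotate_clip(clip, k) for k in range(4)]
    labels = list(range(4))
    return rotations, labels
```

Because the labels come for free from the transformation itself, no manual annotation is needed.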

 

Mix-and-Match Tuning for Self-Supervised Semantic Segmentation (AAAI 2018)

Author: Xiaohang Zhan (The Chinese University of Hong Kong)

  • Three steps: 1) pre-training by colorization; 2) M&M tuning; 3) the target segmentation task
  • M&M tuning: 1) sample image patches, discard heavily overlapping ones, assign each remaining patch a unique class label (e.g. car, person) from the ground-truth segmentation, and mix all patches together; 2) fine-tune the network with a triplet loss
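The fine-tuning objective in step 2 can be illustrated with a minimal sketch; this is the standard hinge-form triplet loss on patch embeddings, not the authors' exact implementation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: pull same-class patch embeddings together,
    push different-class embeddings at least `margin` apart."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to same class
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to other class
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero whenever the negative is already `margin` farther from the anchor than the positive, so only violating triplets drive the fine-tuning.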

 

Boosting Self-Supervised Learning via Knowledge Transfer

  • Overall framework diagram
  • The pretext task in (a) is the Jigsaw++ task
    • In Jigsaw++, a random number of tiles in the grid (up to 2) is replaced with (occluding) tiles from another random image
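The tile-replacement step can be sketched as follows; the 3×3 grid, the helper name, and the use of a shuffled permutation as the label are illustrative assumptions, not the paper's code:

```python
import numpy as np

def jigsaw_pp_tiles(img, other, grid=3, max_occ=2, rng=None):
    """Split a square image into grid x grid tiles, replace up to
    max_occ random tiles with tiles from another image (the Jigsaw++
    occluders), then shuffle; the permutation is the training label."""
    rng = rng or np.random.default_rng(0)
    h = img.shape[0] // grid
    tiles = [img[i*h:(i+1)*h, j*h:(j+1)*h].copy()
             for i in range(grid) for j in range(grid)]
    occluders = [other[i*h:(i+1)*h, j*h:(j+1)*h]
                 for i in range(grid) for j in range(grid)]
    n_occ = rng.integers(0, max_occ + 1)              # 0, 1, or 2 occluders
    for idx in rng.choice(grid * grid, size=n_occ, replace=False):
        tiles[idx] = occluders[idx].copy()
    perm = rng.permutation(grid * grid)
    return [tiles[p] for p in perm], perm
```

The occluding tiles make the task harder than plain Jigsaw, since the network must also decide which tiles do not belong to the image at all.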

 

DeepCluster

Uses clustering to generate a pseudo-label for each cluster, and these pseudo-labels supervise the pre-training.
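The pseudo-label step can be sketched with a plain k-means loop; DeepCluster clusters deep features from the network being trained, whereas here the features and hyper-parameters are placeholders:

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=10, rng=None):
    """Cluster feature vectors with k-means; the resulting cluster
    assignments serve as pseudo-labels for a classification head."""
    rng = rng or np.random.default_rng(0)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # squared distance from every feature to every center
        dists = ((features[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(k):
            if (labels == c).any():               # skip empty clusters
                centers[c] = features[labels == c].mean(0)
    return labels
```

In DeepCluster this clustering is re-run periodically, so the pseudo-labels evolve together with the learned features.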
