300+ CVPR 2020 Papers with Open-Source Code (Repost)

This article is reposted from https://github.com/amusi/CVPR2020-Code; please visit that repository for full details and further updates.

Preface

Previously, Amusi compiled PDF download resources for all 1,467 CVPR 2020 papers, as well as a list of 270 CVPR 2020 papers with open-source code. For details, see: 270 CVPR 2020 Papers with Open-Source Code, https://github.com/amusi/daily-paper-computer-vision

As soon as the CVPR 2020 open-source code project was released, it drew considerable attention from CVers. Its key features: open-source code and organization by research direction. The repository has now passed 2,000+ stars, and many CVPR 2020 paper authors from China and abroad have opened issues to share their work.

The list has been updated again and now covers more than 300 papers with open-source code. The project is still being updated continuously; contributions are welcome, and it is recommended reading:

https://github.com/amusi/CVPR2020-Code

Note: the content below is quite dense. Reply "CVPR2020" to the CVer WeChat official account to download everything listed below:
CVPR2020-Code

CNN
Image Classification
Object Detection
3D Object Detection
Video Object Detection
Object Tracking
Semantic Segmentation
Instance Segmentation
Panoptic Segmentation
Video Object Segmentation
Superpixel Segmentation
NAS
GAN
Re-ID
3D Point Cloud (Classification / Segmentation / Registration / Tracking, etc.)
Face (Recognition / Detection / Reconstruction, etc.)
Human Pose Estimation (2D/3D)
Human Parsing
Scene Text Detection
Scene Text Recognition
Feature (Point) Detection and Description
Super-Resolution
Model Compression / Pruning
Video Understanding / Action Recognition
Crowd Counting
Depth Estimation
6D Object Pose Estimation
Hand Pose Estimation
Saliency Detection
Denoising
Deblurring
Dehazing
Keypoint Detection and Description
Visual Question Answering (VQA)
Video Question Answering (VideoQA)
Vision-and-Language Navigation
Video Compression
Video Frame Interpolation
Style Transfer
Lane Detection
Human-Object Interaction (HOI) Detection
Trajectory Prediction
Motion Prediction
Optical Flow Estimation
Image Retrieval
Virtual Try-On
HDR
Adversarial Examples
3D Reconstruction
Depth Completion
Semantic Scene Completion
Image/Video Captioning
Wireframe Parsing
Datasets
Others

CNN

Exploring Self-attention for Image Recognition

Paper: https://hszhao.github.io/papers/cvpr20_san.pdf

Code: https://github.com/hszhao/SAN

Improving Convolutional Networks with Self-Calibrated Convolutions

Homepage: https://mmcheng.net/scconv/

Paper: http://mftp.mmcheng.net/Papers/20cvprSCNet.pdf

Code: https://github.com/backseason/SCNet

Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets

Paper: https://arxiv.org/abs/2003.13549
Code: https://github.com/zeiss-microscopy/BSConv

Image Classification

Compositional Convolutional Neural Networks: A Deep Architecture with Innate Robustness to Partial Occlusion

Paper: https://arxiv.org/abs/2003.04490

Code: https://github.com/AdamKortylewski/CompositionalNets

Spatially Attentive Output Layer for Image Classification

Paper: https://arxiv.org/abs/2004.07570

Code (appears to have been removed by the original author): https://github.com/ildoonet/spatially-attentive-output-layer

Object Detection

Noise-Aware Fully Webly Supervised Object Detection

Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Shen_Noise-Aware_Fully_Webly_Supervised_Object_Detection_CVPR_2020_paper.html
Code: https://github.com/shenyunhang/NA-fWebSOD/

Learning a Unified Sample Weighting Network for Object Detection

Paper: https://arxiv.org/abs/2006.06568
Code: https://github.com/caiqi/sample-weighting-network

D2Det: Towards High Quality Object Detection and Instance Segmentation

Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Cao_D2Det_Towards_High_Quality_Object_Detection_and_Instance_Segmentation_CVPR_2020_paper.pdf

Code: https://github.com/JialeCao001/D2Det

Dynamic Refinement Network for Oriented and Densely Packed Object Detection

Paper: https://arxiv.org/abs/2005.09973

Code & Dataset: https://github.com/Anymake/DRN_CVPR2020

Scale-Equalizing Pyramid Convolution for Object Detection

Paper: https://arxiv.org/abs/2005.03101

Code: https://github.com/jshilong/SEPC

Revisiting the Sibling Head in Object Detector

Paper: https://arxiv.org/abs/2003.07540

Code: https://github.com/Sense-X/TSD

Detection in Crowded Scenes: One Proposal, Multiple Predictions

Paper: https://arxiv.org/abs/2003.09163
Code: https://github.com/megvii-model/CrowdDetection

Instance-aware, Context-focused, and Memory-efficient Weakly Supervised Object Detection

Paper: https://arxiv.org/abs/2004.04725
Code: https://github.com/NVlabs/wetectron

Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection

Paper: https://arxiv.org/abs/1912.02424
Code: https://github.com/sfzhang15/ATSS

BiDet: An Efficient Binarized Object Detector

Paper: https://arxiv.org/abs/2003.03961
Code: https://github.com/ZiweiWangTHU/BiDet

Harmonizing Transferability and Discriminability for Adapting Object Detectors

Paper: https://arxiv.org/abs/2003.06297
Code: https://github.com/chaoqichen/HTCN

CentripetalNet: Pursuing High-quality Keypoint Pairs for Object Detection

Paper: https://arxiv.org/abs/2003.09119
Code: https://github.com/KiveeDong/CentripetalNet

Hit-Detector: Hierarchical Trinity Architecture Search for Object Detection

Paper: https://arxiv.org/abs/2003.11818
Code: https://github.com/ggjy/HitDet.pytorch

EfficientDet: Scalable and Efficient Object Detection

Paper: https://arxiv.org/abs/1911.09070
Code: https://github.com/google/automl/tree/master/efficientdet

3D Object Detection

Structure Aware Single-stage 3D Object Detection from Point Cloud

Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.html

Code: https://github.com/skyhehe123/SA-SSD

IDA-3D: Instance-Depth-Aware 3D Object Detection from Stereo Vision for Autonomous Driving

Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Peng_IDA-3D_Instance-Depth-Aware_3D_Object_Detection_From_Stereo_Vision_for_Autonomous_CVPR_2020_paper.pdf

Code: https://github.com/swords123/IDA-3D

Train in Germany, Test in The USA: Making 3D Object Detectors Generalize

Paper: https://arxiv.org/abs/2005.08139

Code: https://github.com/cxy1997/3D_adapt_auto_driving

MLCVNet: Multi-Level Context VoteNet for 3D Object Detection

Paper: https://arxiv.org/abs/2004.05679
Code: https://github.com/NUAAXQ/MLCVNet

3DSSD: Point-based 3D Single Stage Object Detector

CVPR 2020 Oral

Paper: https://arxiv.org/abs/2002.10187

Code: https://github.com/tomztyang/3DSSD

Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation

Paper: https://arxiv.org/abs/2004.03572
Code: https://github.com/zju3dv/disprcnn

End-to-End Pseudo-LiDAR for Image-Based 3D Object Detection

Paper: https://arxiv.org/abs/2004.03080

Code: https://github.com/mileyan/pseudo-LiDAR_e2e

DSGN: Deep Stereo Geometry Network for 3D Object Detection

Paper: https://arxiv.org/abs/2001.03398
Code: https://github.com/chenyilun95/DSGN

LiDAR-based Online 3D Video Object Detection with Graph-based Message Passing and Spatiotemporal Transformer Attention

Paper: https://arxiv.org/abs/2004.01389
Code: https://github.com/yinjunbo/3DVID

PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection

Paper: https://arxiv.org/abs/1912.13192

Code: https://github.com/sshaoshuai/PV-RCNN

Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud

Paper: https://arxiv.org/abs/2003.01251
Code: https://github.com/WeijingShi/Point-GNN

Video Object Detection

Memory Enhanced Global-Local Aggregation for Video Object Detection

Paper: https://arxiv.org/abs/2003.12063

Code: https://github.com/Scalsol/mega.pytorch

Object Tracking

SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking

Paper: https://arxiv.org/abs/1911.07241
Code: https://github.com/ohhhyeahhh/SiamCAR

D3S – A Discriminative Single Shot Segmentation Tracker

Paper: https://arxiv.org/abs/1911.08862
Code: https://github.com/alanlukezic/d3s

ROAM: Recurrently Optimizing Tracking Model

Paper: https://arxiv.org/abs/1907.12006

Code: https://github.com/skyoung/ROAM

Siam R-CNN: Visual Tracking by Re-Detection

Homepage: https://www.vision.rwth-aachen.de/page/siamrcnn
Paper: https://arxiv.org/abs/1911.12836
Paper 2: https://www.vision.rwth-aachen.de/media/papers/192/siamrcnn.pdf
Code: https://github.com/VisualComputingInstitute/SiamR-CNN

Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises

Paper: https://arxiv.org/abs/2003.09595
Code: https://github.com/MasterBin-IIAU/CSA

High-Performance Long-Term Tracking with Meta-Updater

Paper: https://arxiv.org/abs/2004.00305

Code: https://github.com/Daikenan/LTMU

AutoTrack: Towards High-Performance Visual Tracking for UAV with Automatic Spatio-Temporal Regularization

Paper: https://arxiv.org/abs/2003.12949

Code: https://github.com/vision4robotics/AutoTrack

Probabilistic Regression for Visual Tracking

Paper: https://arxiv.org/abs/2003.12565
Code: https://github.com/visionml/pytracking

MAST: A Memory-Augmented Self-supervised Tracker

Paper: https://arxiv.org/abs/2002.07793
Code: https://github.com/zlai0/MAST

Siamese Box Adaptive Network for Visual Tracking

Paper: https://arxiv.org/abs/2003.06761

Code: https://github.com/hqucv/siamban

Semantic Segmentation

Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation

Paper: not yet available

Code: https://github.com/JianqiangWan/Super-BPD

Single-Stage Semantic Segmentation from Image Labels

Paper: https://arxiv.org/abs/2005.08104

Code: https://github.com/visinf/1-stage-wseg

Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation

Paper: https://arxiv.org/abs/2003.00867
Code: https://github.com/MyeongJin-Kim/Learning-Texture-Invariant-Representation

MSeg: A Composite Dataset for Multi-domain Semantic Segmentation

Paper: http://vladlen.info/papers/MSeg.pdf
Code: https://github.com/mseg-dataset/mseg-api

CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement

Paper: https://arxiv.org/abs/2005.02551
Code: https://github.com/hkchengrex/CascadePSP

Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision

Oral
Paper: https://arxiv.org/abs/2004.07703
Code: https://github.com/feipan664/IntraDA

Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation

Paper: https://arxiv.org/abs/2004.04581
Code: https://github.com/YudeWang/SEAM

Temporally Distributed Networks for Fast Video Segmentation

Paper: https://arxiv.org/abs/2004.01800

Code: https://github.com/feinanshan/TDNet

Context Prior for Scene Segmentation

Paper: https://arxiv.org/abs/2004.01547

Code: https://git.io/ContextPrior

Strip Pooling: Rethinking Spatial Pooling for Scene Parsing

Paper: https://arxiv.org/abs/2003.13328

Code: https://github.com/Andrew-Qibin/SPNet

Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks

Paper: https://arxiv.org/abs/2003.05128
Code: https://github.com/shachoi/HANet

Learning Dynamic Routing for Semantic Segmentation

Paper: https://arxiv.org/abs/2003.10401

Code: https://github.com/yanwei-li/DynamicRouting

Instance Segmentation

D2Det: Towards High Quality Object Detection and Instance Segmentation

Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Cao_D2Det_Towards_High_Quality_Object_Detection_and_Instance_Segmentation_CVPR_2020_paper.pdf

Code: https://github.com/JialeCao001/D2Det

PolarMask: Single Shot Instance Segmentation with Polar Representation

Paper: https://arxiv.org/abs/1909.13226
Code: https://github.com/xieenze/PolarMask
Analysis: https://zhuanlan.zhihu.com/p/84890413

CenterMask : Real-Time Anchor-Free Instance Segmentation

Paper: https://arxiv.org/abs/1911.06667
Code: https://github.com/youngwanLEE/CenterMask

BlendMask: Top-Down Meets Bottom-Up for Instance Segmentation

Paper: https://arxiv.org/abs/2001.00309
Code: https://github.com/aim-uofa/AdelaiDet

Deep Snake for Real-Time Instance Segmentation

Paper: https://arxiv.org/abs/2001.01629
Code: https://github.com/zju3dv/snake

Mask Encoding for Single Shot Instance Segmentation

Paper: https://arxiv.org/abs/2003.11712

Code: https://github.com/aim-uofa/AdelaiDet

Panoptic Segmentation

Pixel Consensus Voting for Panoptic Segmentation

Paper: https://arxiv.org/abs/2004.01849
Code: not yet released

BANet: Bidirectional Aggregation Network with Occlusion Handling for Panoptic Segmentation

Paper: https://arxiv.org/abs/2003.14031

Code: https://github.com/Mooonside/BANet

Video Object Segmentation

A Transductive Approach for Video Object Segmentation

Paper: https://arxiv.org/abs/2004.07193

Code: https://github.com/microsoft/transductive-vos.pytorch

State-Aware Tracker for Real-Time Video Object Segmentation

Paper: https://arxiv.org/abs/2003.00482

Code: https://github.com/MegviiDetection/video_analyst

Learning Fast and Robust Target Models for Video Object Segmentation

Paper: https://arxiv.org/abs/2003.00908
Code: https://github.com/andr345/frtm-vos

Learning Video Object Segmentation from Unlabeled Videos

Paper: https://arxiv.org/abs/2003.05020
Code: https://github.com/carrierlxk/MuG

Superpixel Segmentation

Superpixel Segmentation with Fully Convolutional Networks

Paper: https://arxiv.org/abs/2003.12929
Code: https://github.com/fuy34/superpixel_fcn

NAS

AOWS: Adaptive and optimal network width search with latency constraints

Paper: https://arxiv.org/abs/2005.10481
Code: https://github.com/bermanmaxim/AOWS

Densely Connected Search Space for More Flexible Neural Architecture Search

Paper: https://arxiv.org/abs/1906.09607

Code: https://github.com/JaminFong/DenseNAS

MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning

Paper: https://arxiv.org/abs/2003.14058

Code: https://github.com/bhpfelix/MTLNAS

FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions

Paper: https://arxiv.org/abs/2004.05565

Code: https://github.com/facebookresearch/mobile-vision

Neural Architecture Search for Lightweight Non-Local Networks

Paper: https://arxiv.org/abs/2004.01961
Code: https://github.com/LiYingwei/AutoNL

Rethinking Performance Estimation in Neural Architecture Search

Paper: https://arxiv.org/abs/2005.09917
Code: https://github.com/zhengxiawu/rethinking_performance_estimation_in_NAS
Analysis 1: https://www.zhihu.com/question/372070853/answer/1035234510
Analysis 2: https://zhuanlan.zhihu.com/p/111167409

CARS: Continuous Evolution for Efficient Neural Architecture Search

Paper: https://arxiv.org/abs/1909.04977
Code (coming soon): https://github.com/huawei-noah/CARS

GAN

Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning

Paper: https://arxiv.org/abs/1912.01899
Code: https://github.com/SsGood/DBGAN

PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer

Paper: https://arxiv.org/abs/1909.06956
Code: https://github.com/wtjiang98/PSGAN

Semantically Multi-modal Image Synthesis

Homepage: http://seanseattle.github.io/SMIS
Paper: https://arxiv.org/abs/2003.12697
Code: https://github.com/Seanseattle/SMIS

Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping

Paper: https://yiranran.github.io/files/CVPR2020_Unpaired%20Portrait%20Drawing%20Generation%20via%20Asymmetric%20Cycle%20Mapping.pdf
Code: https://github.com/yiranran/Unpaired-Portrait-Drawing

Learning to Cartoonize Using White-box Cartoon Representations

Paper: https://github.com/SystemErrorWang/White-box-Cartoonization/blob/master/paper/06791.pdf

Homepage: https://systemerrorwang.github.io/White-box-Cartoonization/

Code: https://github.com/SystemErrorWang/White-box-Cartoonization

Analysis: https://zhuanlan.zhihu.com/p/117422157

Demo video: https://www.bilibili.com/video/av56708333

GAN Compression: Efficient Architectures for Interactive Conditional GANs

Paper: https://arxiv.org/abs/2003.08936

Code: https://github.com/mit-han-lab/gan-compression

Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce Spectral Distributions

Paper: https://arxiv.org/abs/2003.01826
Code: https://github.com/cc-hpc-itwm/UpConv

Re-ID

COCAS: A Large-Scale Clothes Changing Person Dataset for Re-identification

Paper: https://arxiv.org/abs/2005.07862

Dataset: not yet available

Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

Paper: https://arxiv.org/abs/2004.04199

Code: https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking

Pose-guided Visible Part Matching for Occluded Person ReID

Paper: https://arxiv.org/abs/2004.00230
Code: https://github.com/hh23333/PVPM

Weakly supervised discriminative feature learning with state information for person identification

Paper: https://arxiv.org/abs/2002.11939
Code: https://github.com/KovenYu/state-information

3D Point Cloud (Classification / Segmentation / Registration, etc.)
3D Point Cloud Convolution

PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling

Paper: https://arxiv.org/abs/2003.00492
Code: https://github.com/yanx27/PointASNL

Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds

Paper: https://arxiv.org/abs/2003.12971

Code: https://github.com/raoyongming/PointGLR

Grid-GCN for Fast and Scalable Point Cloud Learning

Paper: https://arxiv.org/abs/1912.02984

Code: https://github.com/Xharlie/Grid-GCN

FPConv: Learning Local Flattening for Point Convolution

Paper: https://arxiv.org/abs/2002.10701
Code: https://github.com/lyqun/FPConv

3D Point Cloud Classification

PointAugment: an Auto-Augmentation Framework for Point Cloud Classification

Paper: https://arxiv.org/abs/2002.10876
Code (coming soon): https://github.com/liruihui/PointAugment/

3D Point Cloud Semantic Segmentation

RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds

Paper: https://arxiv.org/abs/1911.11236

Code: https://github.com/QingyongHu/RandLA-Net

Analysis: https://zhuanlan.zhihu.com/p/105433460

Weakly Supervised Semantic Point Cloud Segmentation: Towards 10X Fewer Labels

Paper: https://arxiv.org/abs/2004.0409

Code: https://github.com/alex-xun-xu/WeakSupPointCloudSeg

PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation

Paper: https://arxiv.org/abs/2003.14032
Code: https://github.com/edwardzhou130/PolarSeg

Learning to Segment 3D Point Clouds in 2D Image Space

Paper: https://arxiv.org/abs/2003.05593

Code: https://github.com/WPI-VISLab/Learning-to-Segment-3D-Point-Clouds-in-2D-Image-Space

3D Point Cloud Instance Segmentation

PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation

Paper: https://arxiv.org/abs/2004.01658
Code: https://github.com/Jia-Research-Lab/PointGroup

3D Point Cloud Registration

D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features

Paper: https://arxiv.org/abs/2003.03164
Code: https://github.com/XuyangBai/D3Feat

RPM-Net: Robust Point Matching using Learned Features

Paper: https://arxiv.org/abs/2003.13479
Code: https://github.com/yewzijian/RPMNet

3D Point Cloud Completion

Cascaded Refinement Network for Point Cloud Completion

Paper: https://arxiv.org/abs/2004.03327
Code: https://github.com/xiaogangw/cascaded-point-completion

3D Point Cloud Object Tracking

P2B: Point-to-Box Network for 3D Object Tracking in Point Clouds

Paper: https://arxiv.org/abs/2005.13888
Code: https://github.com/HaozheQi/P2B

Face
Face Recognition

CurricularFace: Adaptive Curriculum Learning Loss for Deep Face Recognition

Paper: https://arxiv.org/abs/2004.00288

Code: https://github.com/HuangYG123/CurricularFace

Learning Meta Face Recognition in Unseen Domains

Paper: https://arxiv.org/abs/2003.07733
Code: https://github.com/cleardusk/MFR
Analysis: https://mp.weixin.qq.com/s/YZoEnjpnlvb90qSI3xdJqQ

Face Detection
Face Anti-Spoofing

Searching Central Difference Convolutional Networks for Face Anti-Spoofing

Paper: https://arxiv.org/abs/2003.04092

Code: https://github.com/ZitongYu/CDCN

Facial Expression Recognition

Suppressing Uncertainties for Large-Scale Facial Expression Recognition

Paper: https://arxiv.org/abs/2002.10392

Code (coming soon): https://github.com/kaiwang960112/Self-Cure-Network

Face Frontalization

Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images

Paper: https://arxiv.org/abs/2003.08124
Code: https://github.com/Hangz-nju-cuhk/Rotate-and-Render

3D Face Reconstruction

AvatarMe: Realistically Renderable 3D Facial Reconstruction “in-the-wild”

Paper: https://arxiv.org/abs/2003.13845
Dataset: https://github.com/lattas/AvatarMe

FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction

Paper: https://arxiv.org/abs/2003.13989
Code: https://github.com/zhuhao-nju/facescape

Human Pose Estimation (2D/3D)
2D Human Pose Estimation

HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation

Paper: https://arxiv.org/abs/1908.10357
Code: https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation

The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation

Paper: https://arxiv.org/abs/1911.07524
Code: https://github.com/HuangJunJie2017/UDP-Pose
Analysis: https://zhuanlan.zhihu.com/p/92525039

Distribution-Aware Coordinate Representation for Human Pose Estimation

Homepage: https://ilovepose.github.io/coco/

Paper: https://arxiv.org/abs/1910.06278

Code: https://github.com/ilovepose/DarkPose

3D Human Pose Estimation

Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach

Homepage: https://www.zhe-zhang.com/cvpr2020

Paper: https://arxiv.org/abs/2003.11163

Code: https://github.com/CHUNYUWANG/imu-human-pose-pytorch

Bodies at Rest: 3D Human Pose and Shape Estimation from a Pressure Image using Synthetic Data

Paper: https://arxiv.org/abs/2004.01166

Code: https://github.com/Healthcare-Robotics/bodies-at-rest

Dataset: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/KOA4ML

Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis

Homepage: http://val.cds.iisc.ac.in/pgp-human/
Paper: https://arxiv.org/abs/2004.04400

Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation

Paper: https://arxiv.org/abs/2004.00329
Code: https://github.com/fabbrimatteo/LoCO

VIBE: Video Inference for Human Body Pose and Shape Estimation

Paper: https://arxiv.org/abs/1912.05656
Code: https://github.com/mkocabas/VIBE

Back to the Future: Joint Aware Temporal Deep Learning 3D Human Pose Estimation

Paper: https://arxiv.org/abs/2002.11251
Code: https://github.com/vnmr/JointVideoPose3D

Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS

Paper: https://arxiv.org/abs/2003.03972
Dataset: not yet available

Human Parsing

Correlating Edge, Pose with Parsing

Paper: https://arxiv.org/abs/2005.01431

Code: https://github.com/ziwei-zh/CorrPM

Scene Text Detection

ContourNet: Taking a Further Step Toward Accurate Arbitrary-Shaped Scene Text Detection

Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_ContourNet_Taking_a_Further_Step_Toward_Accurate_Arbitrary-Shaped_Scene_Text_CVPR_2020_paper.pdf
Code: https://github.com/wangyuxin87/ContourNet

UnrealText: Synthesizing Realistic Scene Text Images from the Unreal World

Paper: https://arxiv.org/abs/2003.10608
Code & Dataset: https://github.com/Jyouhou/UnrealText/

ABCNet: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network

Paper: https://arxiv.org/abs/2002.10200
Code (coming soon): https://github.com/Yuliang-Liu/bezier_curve_text_spotting
Code (coming soon): https://github.com/aim-uofa/adet

Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection

Paper: https://arxiv.org/abs/2003.07493

Code: https://github.com/GXYM/DRRG

Scene Text Recognition

SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition

Paper: https://arxiv.org/abs/2005.10977
Code: https://github.com/Pay20Y/SEED

UnrealText: Synthesizing Realistic Scene Text Images from the Unreal World

Paper: https://arxiv.org/abs/2003.10608
Code & Dataset: https://github.com/Jyouhou/UnrealText/

ABCNet: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network

Paper: https://arxiv.org/abs/2002.10200
Code (coming soon): https://github.com/aim-uofa/adet

Learn to Augment: Joint Data Augmentation and Network Optimization for Text Recognition

Paper: https://arxiv.org/abs/2003.06606

Code: https://github.com/Canjie-Luo/Text-Image-Augmentation

Feature (Point) Detection and Description

SuperGlue: Learning Feature Matching with Graph Neural Networks

Paper: https://arxiv.org/abs/1911.11763
Code: https://github.com/magicleap/SuperGluePretrainedNetwork

Super-Resolution
Image Super-Resolution

Closed-Loop Matters: Dual Regression Networks for Single Image Super-Resolution

Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Closed-Loop_Matters_Dual_Regression_Networks_for_Single_Image_Super-Resolution_CVPR_2020_paper.html
Code: https://github.com/guoyongcs/DRN

Learning Texture Transformer Network for Image Super-Resolution

Paper: https://arxiv.org/abs/2006.04139

Code: https://github.com/FuzhiYang/TTSR

Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining

Paper: https://arxiv.org/abs/2006.01424
Code: https://github.com/SHI-Labs/Cross-Scale-Non-Local-Attention

Structure-Preserving Super Resolution with Gradient Guidance

Paper: https://arxiv.org/abs/2003.13081

Code: https://github.com/Maclory/SPSR

Rethinking Data Augmentation for Image Super-resolution: A Comprehensive Analysis and a New Strategy

Paper: https://arxiv.org/abs/2004.00448

Code: https://github.com/clovaai/cutblur

Video Super-Resolution

TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution

Paper: https://arxiv.org/abs/1812.02898
Code: https://github.com/YapengTian/TDAN-VSR-CVPR-2020

Space-Time-Aware Multi-Resolution Video Enhancement

Homepage: https://alterzero.github.io/projects/STAR.html
Paper: http://arxiv.org/abs/2003.13170
Code: https://github.com/alterzero/STARnet

Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution

Paper: https://arxiv.org/abs/2002.11616
Code: https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020

Model Compression / Pruning

DMCP: Differentiable Markov Channel Pruning for Neural Networks

Paper: https://arxiv.org/abs/2005.03354
Code: https://github.com/zx55/dmcp

Forward and Backward Information Retention for Accurate Binary Neural Networks

Paper: https://arxiv.org/abs/1909.10788

Code: https://github.com/htqin/IR-Net

Towards Efficient Model Compression via Learned Global Ranking

Paper: https://arxiv.org/abs/1904.12368
Code: https://github.com/cmu-enyac/LeGR

HRank: Filter Pruning using High-Rank Feature Map

Paper: http://arxiv.org/abs/2002.10179
Code: https://github.com/lmbxmu/HRank

GAN Compression: Efficient Architectures for Interactive Conditional GANs

Paper: https://arxiv.org/abs/2003.08936

Code: https://github.com/mit-han-lab/gan-compression

Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression

Paper: https://arxiv.org/abs/2003.08935

Code: https://github.com/ofsoundof/group_sparsity

Video Understanding / Action Recognition

Oops! Predicting Unintentional Action in Video

Homepage: https://oops.cs.columbia.edu/

Paper: https://arxiv.org/abs/1911.11206

Code: https://github.com/cvlab-columbia/oops

Dataset: https://oops.cs.columbia.edu/data

PREDICT & CLUSTER: Unsupervised Skeleton Based Action Recognition

Paper: https://arxiv.org/abs/1911.12409
Code: https://github.com/shlizee/Predict-Cluster

Intra- and Inter-Action Understanding via Temporal Action Parsing

Paper: https://arxiv.org/abs/2005.10229
Homepage & Dataset: https://sdolivia.github.io/TAPOS/

3DV: 3D Dynamic Voxel for Action Recognition in Depth Video

Paper: https://arxiv.org/abs/2005.05501
Code: https://github.com/3huo/3DV-Action

FineGym: A Hierarchical Video Dataset for Fine-grained Action Understanding

Homepage: https://sdolivia.github.io/FineGym/
Paper: https://arxiv.org/abs/2004.06704

TEA: Temporal Excitation and Aggregation for Action Recognition

Paper: https://arxiv.org/abs/2004.01398

Code: https://github.com/Phoenix1327/tea-action-recognition

X3D: Expanding Architectures for Efficient Video Recognition

Paper: https://arxiv.org/abs/2004.04730

Code: https://github.com/facebookresearch/SlowFast

Temporal Pyramid Network for Action Recognition

Homepage: https://decisionforce.github.io/TPN

Paper: https://arxiv.org/abs/2004.03548

Code: https://github.com/decisionforce/TPN

Skeleton-Based Action Recognition

Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition

Paper: https://arxiv.org/abs/2003.14111
Code: https://github.com/kenziyuliu/ms-g3d

Crowd Counting

Depth Estimation

BiFuse: Monocular 360° Depth Estimation via Bi-Projection Fusion

Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_BiFuse_Monocular_360_Depth_Estimation_via_Bi-Projection_Fusion_CVPR_2020_paper.pdf
Code: https://github.com/Yeh-yu-hsuan/BiFuse

Focus on defocus: bridging the synthetic to real domain gap for depth estimation

Paper: https://arxiv.org/abs/2005.09623
Code: https://github.com/dvl-tum/defocus-net

Bi3D: Stereo Depth Estimation via Binary Classifications

Paper: https://arxiv.org/abs/2005.07274

Code: https://github.com/NVlabs/Bi3D

AANet: Adaptive Aggregation Network for Efficient Stereo Matching

Paper: https://arxiv.org/abs/2004.09548
Code: https://github.com/haofeixu/aanet

Towards Better Generalization: Joint Depth-Pose Learning without PoseNet

Paper: https://github.com/B1ueber2y/TrianFlow

Code: https://github.com/B1ueber2y/TrianFlow

Monocular Depth Estimation

On the uncertainty of self-supervised monocular depth estimation

Paper: https://arxiv.org/abs/2005.06209
Code: https://github.com/mattpoggi/mono-uncertainty

3D Packing for Self-Supervised Monocular Depth Estimation

Paper: https://arxiv.org/abs/1905.02693
Code: https://github.com/TRI-ML/packnet-sfm
Demo video: https://www.bilibili.com/video/av70562892/

Domain Decluttering: Simplifying Images to Mitigate Synthetic-Real Domain Shift and Improve Depth Estimation

Paper: https://arxiv.org/abs/2002.12114
Code: https://github.com/yzhao520/ARC

6D Object Pose Estimation

MoreFusion: Multi-object Reasoning for 6D Pose Estimation from Volumetric Fusion

Paper: https://arxiv.org/abs/2004.04336
Code: https://github.com/wkentaro/morefusion

EPOS: Estimating 6D Pose of Objects with Symmetries

Homepage: http://cmp.felk.cvut.cz/epos

Paper: https://arxiv.org/abs/2004.00605

G2L-Net: Global to Local Network for Real-time 6D Pose Estimation with Embedding Vector Features

Paper: https://arxiv.org/abs/2003.11089

Code: https://github.com/DC1991/G2L_Net

Hand Pose Estimation

HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation

Paper: https://arxiv.org/abs/2004.00060

Homepage: http://vision.sice.indiana.edu/projects/hopenet

Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data

Paper: https://arxiv.org/abs/2003.09572

Code: https://github.com/CalciferZh/minimal-hand

Saliency Detection

JL-DCF: Joint Learning and Densely-Cooperative Fusion Framework for RGB-D Salient Object Detection

Paper: https://arxiv.org/abs/2004.08515

Code: https://github.com/kerenfu/JLDCF/

UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders

Homepage: http://dpfan.net/d3netbenchmark/

Paper: https://arxiv.org/abs/2004.05763

Code: https://github.com/JingZhang617/UCNet

Denoising

A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising

Paper: https://arxiv.org/abs/2003.12751

Code: https://github.com/Vandermode/NoiseModel

CycleISP: Real Image Restoration via Improved Data Synthesis

Paper: https://arxiv.org/abs/2003.07761

Code: https://github.com/swz30/CycleISP

Deraining

Multi-Scale Progressive Fusion Network for Single Image Deraining

Paper: https://arxiv.org/abs/2003.10985

Code: https://github.com/kuihua/MSPFN

Deblurring
Video Deblurring

Cascaded Deep Video Deblurring Using Temporal Sharpness Prior

Homepage: https://csbhr.github.io/projects/cdvd-tsp/index.html
Paper: https://arxiv.org/abs/2004.02501
Code: https://github.com/csbhr/CDVD-TSP

Dehazing

Multi-Scale Boosted Dehazing Network with Dense Feature Fusion

Paper: https://arxiv.org/abs/2004.13388

Code: https://github.com/BookerDeWitt/MSBDN-DFF

Keypoint Detection and Description

ASLFeat: Learning Local Features of Accurate Shape and Localization

Paper: https://arxiv.org/abs/2003.10071

Code: https://github.com/lzx551402/aslfeat

Visual Question Answering (VQA)

VC R-CNN: Visual Commonsense R-CNN

Paper: https://arxiv.org/abs/2002.12204
Code: https://github.com/Wangt-CN/VC-R-CNN

Video Question Answering (VideoQA)

Hierarchical Conditional Relation Networks for Video Question Answering

Paper: https://arxiv.org/abs/2002.10698
Code: https://github.com/thaolmk54/hcrn-videoqa

Vision-and-Language Navigation

Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training

Paper: https://arxiv.org/abs/2002.10638
Code (coming soon): https://github.com/weituo12321/PREVALENT

Video Compression

Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement

Paper: https://arxiv.org/abs/2003.01966
Code: https://github.com/RenYang-home/HLVC

Video Frame Interpolation

FeatureFlow: Robust Video Interpolation via Structure-to-Texture Generation

Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Gui_FeatureFlow_Robust_Video_Interpolation_via_Structure-to-Texture_Generation_CVPR_2020_paper.html

Code: https://github.com/CM-BF/FeatureFlow

Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution

Paper: https://arxiv.org/abs/2002.11616
Code: https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020

Space-Time-Aware Multi-Resolution Video Enhancement

Homepage: https://alterzero.github.io/projects/STAR.html
Paper: http://arxiv.org/abs/2003.13170
Code: https://github.com/alterzero/STARnet

Scene-Adaptive Video Frame Interpolation via Meta-Learning

Paper: https://arxiv.org/abs/2004.00779
Code: https://github.com/myungsub/meta-interpolation

Softmax Splatting for Video Frame Interpolation

Homepage: http://sniklaus.com/papers/softsplat
Paper: https://arxiv.org/abs/2003.05534
Code: https://github.com/sniklaus/softmax-splatting

Style Transfer

Diversified Arbitrary Style Transfer via Deep Feature Perturbation

Paper: https://arxiv.org/abs/1909.08223
Code: https://github.com/EndyWon/Deep-Feature-Perturbation

Collaborative Distillation for Ultra-Resolution Universal Style Transfer

Paper: https://arxiv.org/abs/2003.08436

Code: https://github.com/mingsun-tse/collaborative-distillation

Lane Detection

Inter-Region Affinity Distillation for Road Marking Segmentation

Paper: https://arxiv.org/abs/2004.05304
Code: https://github.com/cardwing/Codes-for-IntRA-KD

Human-Object Interaction (HOI) Detection

PPDM: Parallel Point Detection and Matching for Real-time Human-Object Interaction Detection

Paper: https://arxiv.org/abs/1912.12898
Code: https://github.com/YueLiao/PPDM

Detailed 2D-3D Joint Representation for Human-Object Interaction

Paper: https://arxiv.org/abs/2004.08154

Code: https://github.com/DirtyHarryLYL/DJ-RN

Cascaded Human-Object Interaction Recognition

Paper: https://arxiv.org/abs/2003.04262

Code: https://github.com/tfzhou/C-HOI

VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions

Paper: https://arxiv.org/abs/2003.05541
Code: https://github.com/ASMIftekhar/VSGNet

Trajectory Prediction

The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction

Paper: https://arxiv.org/abs/1912.06445
Code: https://github.com/JunweiLiang/Multiverse
Dataset: https://next.cs.cmu.edu/multiverse/

Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction

Paper: https://arxiv.org/abs/2002.11927
Code: https://github.com/abduallahmohamed/Social-STGCNN

Motion Prediction

Collaborative Motion Prediction via Neural Motion Message Passing

Paper: https://arxiv.org/abs/2003.06594
Code: https://github.com/PhyllisH/NMMP

MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird’s Eye View Maps

Paper: https://arxiv.org/abs/2003.06754

Code: https://github.com/pxiangwu/MotionNet

Optical Flow Estimation

Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation

Paper: https://arxiv.org/abs/2003.13045
Code: https://github.com/lliuz/ARFlow

Image Retrieval

Evade Deep Image Retrieval by Stashing Private Images in the Hash Space

Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Xiao_Evade_Deep_Image_Retrieval_by_Stashing_Private_Images_in_the_CVPR_2020_paper.html
Code: https://github.com/sugarruy/hashstash

Virtual Try-On

Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content

Paper: https://arxiv.org/abs/2003.05863
Code: https://github.com/switchablenorms/DeepFashion_Try_On

HDR

Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline

Homepage: https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR

Paper: https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR_/00942.pdf

Code: https://github.com/alex04072000/SingleHDR

Adversarial Examples

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

Paper: https://arxiv.org/abs/1911.02466
Code: https://github.com/ZhengyuZhao/PerC-Adversarial

3D Reconstruction

Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild

CVPR 2020 Best Paper
Homepage: https://elliottwu.com/projects/unsup3d/
Paper: https://arxiv.org/abs/1911.11130
Code: https://github.com/elliottwu/unsup3d

Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

Homepage: https://shunsukesaito.github.io/PIFuHD/
Paper: https://arxiv.org/abs/2004.00452
Code: https://github.com/facebookresearch/pifuhd

Depth Completion

Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End

Paper: https://arxiv.org/abs/2006.03349

Code: https://github.com/abdo-eldesokey/pncnn

Semantic Scene Completion

3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior

Paper: https://arxiv.org/abs/2003.14052
Code: https://github.com/charlesCXK/3D-SketchAware-SSC

Image/Video Captioning

Syntax-Aware Action Targeting for Video Captioning

Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Zheng_Syntax-Aware_Action_Targeting_for_Video_Captioning_CVPR_2020_paper.pdf
Code: https://github.com/SydCaption/SAAT

Wireframe Parsing

Holistically-Attracted Wireframe Parsing

Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Xue_Holistically-Attracted_Wireframe_Parsing_CVPR_2020_paper.html

Code: https://github.com/cherubicXN/hawp

Datasets

Oops! Predicting Unintentional Action in Video

Homepage: https://oops.cs.columbia.edu/

Paper: https://arxiv.org/abs/1911.11206

Code: https://github.com/cvlab-columbia/oops

Dataset: https://oops.cs.columbia.edu/data

The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction

Paper: https://arxiv.org/abs/1912.06445
Code: https://github.com/JunweiLiang/Multiverse
Dataset: https://next.cs.cmu.edu/multiverse/

Open Compound Domain Adaptation

Homepage: https://liuziwei7.github.io/projects/CompoundDomain.html
Dataset: https://drive.google.com/drive/folders/1_uNTF8RdvhS_sqVTnYx17hEOQpefmE2r?usp=sharing
Paper: https://arxiv.org/abs/1909.03403
Code: https://github.com/zhmiao/OpenCompoundDomainAdaptation-OCDA

Intra- and Inter-Action Understanding via Temporal Action Parsing

Paper: https://arxiv.org/abs/2005.10229
Homepage & Dataset: https://sdolivia.github.io/TAPOS/

Dynamic Refinement Network for Oriented and Densely Packed Object Detection

Paper: https://arxiv.org/abs/2005.09973

Code & Dataset: https://github.com/Anymake/DRN_CVPR2020

COCAS: A Large-Scale Clothes Changing Person Dataset for Re-identification

Paper: https://arxiv.org/abs/2005.07862

Dataset: not yet available

KeypointNet: A Large-scale 3D Keypoint Dataset Aggregated from Numerous Human Annotations

Paper: https://arxiv.org/abs/2002.12687

Dataset: https://github.com/qq456cvb/KeypointNet

MSeg: A Composite Dataset for Multi-domain Semantic Segmentation

Paper: http://vladlen.info/papers/MSeg.pdf
Code: https://github.com/mseg-dataset/mseg-api
Dataset: https://github.com/mseg-dataset/mseg-semantic

AvatarMe: Realistically Renderable 3D Facial Reconstruction “in-the-wild”

Paper: https://arxiv.org/abs/2003.13845
Dataset: https://github.com/lattas/AvatarMe

Learning to Autofocus

Paper: https://arxiv.org/abs/2004.12260
Dataset: not yet available

FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction

Paper: https://arxiv.org/abs/2003.13989
Code: https://github.com/zhuhao-nju/facescape

Bodies at Rest: 3D Human Pose and Shape Estimation from a Pressure Image using Synthetic Data

Paper: https://arxiv.org/abs/2004.01166

Code: https://github.com/Healthcare-Robotics/bodies-at-rest

Dataset: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/KOA4ML

FineGym: A Hierarchical Video Dataset for Fine-grained Action Understanding

Homepage: https://sdolivia.github.io/FineGym/
Paper: https://arxiv.org/abs/2004.06704

A Local-to-Global Approach to Multi-modal Movie Scene Segmentation

Homepage: https://anyirao.com/projects/SceneSeg.html

Paper: https://arxiv.org/abs/2004.02678

Code: https://github.com/AnyiRao/SceneSeg

Deep Homography Estimation for Dynamic Scenes

Paper: https://arxiv.org/abs/2004.02132

Dataset: https://github.com/lcmhoang/hmg-dynamics

Assessing Image Quality Issues for Real-World Problems

Homepage: https://vizwiz.org/tasks-and-datasets/image-quality-issues/
Paper: https://arxiv.org/abs/2003.12511

UnrealText: Synthesizing Realistic Scene Text Images from the Unreal World

Paper: https://arxiv.org/abs/2003.10608
Code & Dataset: https://github.com/Jyouhou/UnrealText/

PANDA: A Gigapixel-level Human-centric Video Dataset

Paper: https://arxiv.org/abs/2003.04852

Dataset: http://www.panda-dataset.com/

IntrA: 3D Intracranial Aneurysm Dataset for Deep Learning

Paper: https://arxiv.org/abs/2003.02920
Dataset: https://github.com/intra3d2019/IntrA

Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS

Paper: https://arxiv.org/abs/2003.03972
Dataset: not yet available

Others

CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus

Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Kluger_CONSAC_Robust_Multi-Model_Fitting_by_Conditional_Sample_Consensus_CVPR_2020_paper.html
Code: https://github.com/fkluger/consac

Learning to Learn Single Domain Generalization

Paper: https://arxiv.org/abs/2003.13216
Code: https://github.com/joffery/M-ADA

Open Compound Domain Adaptation

Homepage: https://liuziwei7.github.io/projects/CompoundDomain.html
Dataset: https://drive.google.com/drive/folders/1_uNTF8RdvhS_sqVTnYx17hEOQpefmE2r?usp=sharing
Paper: https://arxiv.org/abs/1909.03403
Code: https://github.com/zhmiao/OpenCompoundDomainAdaptation-OCDA

Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Paper: http://www.cvlibs.net/publications/Niemeyer2020CVPR.pdf

Code: https://github.com/autonomousvision/differentiable_volumetric_rendering

QEBA: Query-Efficient Boundary-Based Blackbox Attack

Paper: https://arxiv.org/abs/2005.14137
Code: https://github.com/AI-secure/QEBA

Equalization Loss for Long-Tailed Object Recognition

Paper: https://arxiv.org/abs/2003.05176
Code: https://github.com/tztztztztz/eql.detectron2

Instance-aware Image Colorization

Homepage: https://ericsujw.github.io/InstColorization/
Paper: https://arxiv.org/abs/2005.10825
Code: https://github.com/ericsujw/InstColorization

Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting

Paper: https://arxiv.org/abs/2005.09704

Code: https://github.com/Atlas200dk/sample-imageinpainting-HiFill

Where am I looking at? Joint Location and Orientation Estimation by Cross-View Matching

Paper: https://arxiv.org/abs/2005.03860
Code: https://github.com/shiyujiao/cross_view_localization_DSM

Epipolar Transformers

Paper: https://arxiv.org/abs/2005.04551

Code: https://github.com/yihui-he/epipolar-transformers

Bringing Old Photos Back to Life

Homepage: http://raywzy.com/Old_Photo/
Paper: https://arxiv.org/abs/2004.09484

MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask

Paper: https://arxiv.org/abs/2003.10955

Code: https://github.com/microsoft/MaskFlownet

Self-Supervised Viewpoint Learning from Image Collections

Paper: https://arxiv.org/abs/2004.01793
Paper 2: https://research.nvidia.com/sites/default/files/pubs/2020-03_Self-Supervised-Viewpoint-Learning/SSV-CVPR2020.pdf
Code: https://github.com/NVlabs/SSV

Towards Discriminability and Diversity: Batch Nuclear-norm Maximization under Label Insufficient Situations

Oral

Paper: https://arxiv.org/abs/2003.12237

Code: https://github.com/cuishuhao/BNM

Towards Learning Structure via Consensus for Face Segmentation and Parsing

Paper: https://arxiv.org/abs/1911.00957
Code: https://github.com/isi-vista/structure_via_consensus

Plug-and-Play Algorithms for Large-scale Snapshot Compressive Imaging

Oral

Paper: https://arxiv.org/abs/2003.13654

Code: https://github.com/liuyang12/PnP-SCI

Lightweight Photometric Stereo for Facial Details Recovery

Paper: https://arxiv.org/abs/2003.12307
Code: https://github.com/Juyong/FacePSNet

Footprints and Free Space from a Single Color Image

Paper: https://arxiv.org/abs/2004.06376

Code: https://github.com/nianticlabs/footprints

Self-Supervised Monocular Scene Flow Estimation

Paper: https://arxiv.org/abs/2004.04143
Code: https://github.com/visinf/self-mono-sf

Quasi-Newton Solver for Robust Non-Rigid Registration

Paper: https://arxiv.org/abs/2004.04322
Code: https://github.com/Juyong/Fast_RNRR

A Local-to-Global Approach to Multi-modal Movie Scene Segmentation

Homepage: https://anyirao.com/projects/SceneSeg.html

Paper: https://arxiv.org/abs/2004.02678

Code: https://github.com/AnyiRao/SceneSeg

DeepFLASH: An Efficient Network for Learning-based Medical Image Registration

Paper: https://arxiv.org/abs/2004.02097

Code: https://github.com/jw4hv/deepflash

Self-Supervised Scene De-occlusion

Homepage: https://xiaohangzhan.github.io/projects/deocclusion/
Paper: https://arxiv.org/abs/2004.02788
Code: https://github.com/XiaohangZhan/deocclusion

Polarized Reflection Removal with Perfect Alignment in the Wild

Homepage: https://leichenyang.weebly.com/project-polarized.html
Code: https://github.com/ChenyangLEI/CVPR2020-Polarized-Reflection-Removal-with-Perfect-Alignment

Background Matting: The World is Your Green Screen

Paper: https://arxiv.org/abs/2004.00626
Code: http://github.com/senguptaumd/Background-Matting

What Deep CNNs Benefit from Global Covariance Pooling: An Optimization Perspective

Paper: https://arxiv.org/abs/2003.11241

Code: https://github.com/ZhangLi-CS/GCP_Optimization

Look-into-Object: Self-supervised Structure Modeling for Object Recognition

Paper: not yet available
Code: https://github.com/JDAI-CV/LIO

Video Object Grounding using Semantic Roles in Language Description

Paper: https://arxiv.org/abs/2003.10606
Code: https://github.com/TheShadow29/vognet-pytorch

Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives

Paper: https://arxiv.org/abs/2003.10739
Code: https://github.com/d-li14/DHM

SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization

Paper: http://www.cs.umd.edu/~yuejiang/papers/SDFDiff.pdf
Code: https://github.com/YueJiang-nj/CVPR2020-SDFDiff

On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location

Paper: https://arxiv.org/abs/2003.07064

Code: https://github.com/oskyhn/CNNs-Without-Borders

GhostNet: More Features from Cheap Operations

Paper: https://arxiv.org/abs/1911.11907

Code: https://github.com/iamhankai/ghostnet

AdderNet: Do We Really Need Multiplications in Deep Learning?

Paper: https://arxiv.org/abs/1912.13200
Code: https://github.com/huawei-noah/AdderNet

Deep Image Harmonization via Domain Verification

Paper: https://arxiv.org/abs/1911.13239
Code: https://github.com/bcmi/Image_Harmonization_Datasets

Blurry Video Frame Interpolation

Paper: https://arxiv.org/abs/2002.12259
Code: https://github.com/laomao0/BIN

Extremely Dense Point Correspondences using a Learned Feature Descriptor

Paper: https://arxiv.org/abs/2003.00619
Code: https://github.com/lppllppl920/DenseDescriptorLearning-Pytorch

Filter Grafting for Deep Neural Networks

Paper: https://arxiv.org/abs/2001.05868
Code: https://github.com/fxmeng/filter-grafting
Paper analysis: https://www.zhihu.com/question/372070853/answer/1041569335

Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation

Paper: https://arxiv.org/abs/2003.02824
Code: https://github.com/cmhungsteve/SSTDA

Detecting Attended Visual Targets in Video

Paper: https://arxiv.org/abs/2003.02501

Code: https://github.com/ejcgt/attention-target-detection

Deep Image Spatial Transformation for Person Image Generation

Paper: https://arxiv.org/abs/2003.00696
Code: https://github.com/RenYurui/Global-Flow-Local-Attention

Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications

Paper: https://arxiv.org/abs/2003.01455
Code: https://github.com/bbrattoli/ZeroShotVideoClassification

https://github.com/charlesCXK/3D-SketchAware-SSC

https://github.com/Anonymous20192020/Anonymous_CVPR5767

https://github.com/avirambh/ScopeFlow

https://github.com/csbhr/CDVD-TSP

https://github.com/ymcidence/TBH

https://github.com/yaoyao-liu/mnemonics

https://github.com/meder411/Tangent-Images

https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch

https://github.com/sjmoran/deep_local_parametric_filters

https://github.com/bermanmaxim/AOWS

https://github.com/dc3ea9f/look-into-object
