Chen Cao:
https://sites.google.com/site/zjucaochen/home
Hao Li:
http://www.hao-li.com/Hao_Li/Hao_Li_-_about_me.html
A collection of face papers:
http://39.105.183.104/similar/steadiface_realtime_facecentric_stabilization_on_mobile_phones
——————————————————————————————————————————————————————
Chen Cao, Zhejiang University (advisor: Kun Zhou)
Chen Cao, Hongzhi Wu, Yanlin Weng, Tianjia Shao, Kun Zhou, 'Real-time Facial Animation with Image-based Dynamic Avatars', ACM Trans. on Graphics (SIGGRAPH), 2016. ---70 (citation count)
Chen Cao, Derek Bradley, Kun Zhou, Thabo Beeler, 'Real-Time High-Fidelity Facial Performance Capture', ACM Trans. on Graphics (SIGGRAPH), 2015. ---139
Chen Cao, Qiming Hou, Kun Zhou, 'Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation', ACM Trans. on Graphics (SIGGRAPH), 2014. ---235
(http://ld99.top/2015/08/14/DDEregression)
Chen Cao, Yanlin Weng, Stephen Lin, Kun Zhou, '3D Shape Regression for Real-time Facial Animation', ACM Trans. on Graphics (SIGGRAPH), 2013. ---286
(applied on mobile phones; requires a 3D depth camera)
Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong and Kun Zhou, 'FaceWarehouse: a 3D Facial Expression Database for Visual Computing', IEEE Transactions on Visualization & Computer Graphics, 20(3): 413-425, 2014. ---377 (database)
Yanlin Weng, Chen Cao, Qiming Hou and Kun Zhou, 'Real-time Facial Animation on Mobile Devices', Graphical Models, 76(3): 172-179, 2014.---15
Chen Cao, Zhong Ren, Baining Guo and Kun Zhou, 'Interactive Rendering of Non-Constant, Refractive Media using the Ray Equations of Gradient-Index Optics', Computer Graphics Forum (EGSR), 2010.
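The FaceWarehouse entry above is the bilinear face model that several of these papers build on: a rank-3 core tensor whose two weight modes are identity and expression, so a mesh is obtained by contracting the core with an identity weight vector and an expression weight vector. A minimal sketch of that evaluation (dimensions, tensor layout, and variable names here are illustrative, not the actual dataset format):

```python
import numpy as np

# Illustrative dimensions (not the real FaceWarehouse layout):
# V vertices, n_id identity bases, n_exp expression bases.
n_vertices, n_id, n_exp = 1000, 50, 47
rng = np.random.default_rng(0)
core = rng.random((3 * n_vertices, n_id, n_exp))  # bilinear core tensor

# Identity weights are fixed per user; expression weights vary per frame.
w_id = rng.random(n_id)
w_id /= w_id.sum()
w_exp = np.zeros(n_exp)
w_exp[0] = 1.0  # e.g. pick one expression basis

# Contract the core tensor with both weight vectors to get a mesh.
flat = core @ w_exp          # (3V, n_id, n_exp) x (n_exp,) -> (3V, n_id)
flat = flat @ w_id           # (3V, n_id) x (n_id,) -> (3V,)
mesh = flat.reshape(n_vertices, 3)
print(mesh.shape)  # (1000, 3)
```

Because the model is linear in each weight vector separately, real-time trackers like DDE only need to solve for a small `w_exp` per frame once `w_id` is calibrated.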
_____________________________________________________________________
Thies J, Zollhöfer M, Stamminger M, et al. Face2Face: Real-time face capture and reenactment of RGB videos[C]. CVPR, 2016: 2387-2395. ---296
(expression transfer)
————————————————————————————————————————————————————
Papers citing Displaced Dynamic Expression Regression:
Thies J, Zollhöfer M, Nießner M, et al. Real-time expression transfer for facial reenactment[J]. ACM Trans. Graph., 2015, 34(6): 183:1-183:14. ---136
Jeni L A, Cohn J F, Kanade T. Dense 3D face alignment from 2D videos in real-time[C]//2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 2015, 1: 1-8. ---111
Ichim A E, Bouaziz S, Pauly M. Dynamic 3D avatar creation from hand-held video input[J]. ACM Transactions on Graphics (ToG), 2015, 34(4): 45. ---101
Hsieh P L, Ma C, Yu J, et al. Unconstrained realtime facial performance capture[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 1675-1683. ---78
Saito S, Li T, Li H. Real-time facial segmentation and performance capture from RGB input[C]//European Conference on Computer Vision. Springer, Cham, 2016: 244-261. ---74
————————————————————————————————————————————
Speech-driven 3D Facial Animation with Implicit Emotional Awareness: A Deep Learning Approach
(from http://www.research.cs.rutgers.edu/~hxp1/publications.html)
————————————————————————————————————————
SIGGRAPH 2017 session on speech and facial animation
Related papers:
speech-driven facial animation, project homepage, code
Video-audio Driven Real-Time Facial Animation
Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion
Example-Based Synthesis of Stylized Facial Animations, interactive
Synthesizing Obama: Learning Lip Sync from Audio, code
——————————————————————————————————————
With source code available:
2018
CVPR oral paper
Given a single input image as the target object, plus a driving video sequence depicting a moving object, the framework generates a video in which the target object moves according to the driving sequence.
(The motion of the object in the video is transferred to a still image, producing a video of the object in that image performing the same motion.)
Code was released a few months ago.
- GANimation: Anatomically-aware Facial Animation from a Single Image
GANimation, an ECCV 2018 oral paper, generates continuous expression animations from a single image plus an expression encoding.
- StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
StarGAN, proposed at CVPR 2018, performs image-to-image translation across multiple domains.