FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality

Cited by: 53
Authors
Thies, Justus [1 ]
Zollhofer, Michael [2 ]
Stamminger, Marc [3 ]
Theobalt, Christian [4 ]
Niessner, Matthias [1 ]
Affiliations
[1] Tech Univ Munich, Dept Informat, Boltzmannstr 3, D-85748 Garching, Germany
[2] Stanford Univ, Dept Comp Sci, Comp Graph Lab, 353 Serra Mall, Stanford, CA 94305 USA
[3] Univ Erlangen Nurnberg, Lehrstuhl Informat 9, Cauerstr 11, D-91058 Erlangen, Germany
[4] Max Planck Inst Informat, Saarland Informat Campus,Campus E 1-4 Off 228, D-66123 Saarbrucken, Germany
Source
ACM TRANSACTIONS ON GRAPHICS | 2018年 / 37卷 / 02期
Funding
European Research Council;
Keywords
Face tracking; virtual reality; eye tracking; MODEL;
DOI
10.1145/3182644
CLC classification
TP31 [Computer software];
Discipline code
081202 ; 0835 ;
Abstract
We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context focus on the animation of rigged 3D avatars (Li et al. 2015; Olszewski et al. 2016). Although they achieve good tracking performance, the results look cartoonish rather than real. In contrast to these model-based approaches, FaceVR enables VR teleconferencing using an image-based technique that produces nearly photo-realistic output. The key components of FaceVR are a robust algorithm for real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), and a new data-driven approach for eye tracking from monocular videos. Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR performs photo-realistic re-rendering in real time, allowing artificial modification of face and eye appearance: for instance, we can alter facial expressions or change the gaze direction in the prerecorded target video. In a live setup, we combine these newly introduced algorithmic components to enable gaze-aware self-reenactment for VR teleconferencing.
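To make the reenactment idea concrete, the sketch below illustrates parametric expression transfer, the core mechanism behind model-based face reenactment that FaceVR builds on: a face is expressed as a mean shape plus linear identity and expression bases (a simplified 3D morphable model), and the tracked source expression coefficients drive the target's geometry while the target identity is kept. This is a minimal toy illustration with hypothetical dimensions and random bases, not the authors' implementation.

```python
import numpy as np

# Toy dimensions (real face models use tens of thousands of vertices
# and dozens of basis vectors; these values are hypothetical).
N_VERTS = 5
N_ID, N_EXPR = 3, 4

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N_VERTS, 3))          # average face geometry
id_basis = rng.normal(size=(N_ID, N_VERTS, 3))      # identity basis vectors
expr_basis = rng.normal(size=(N_EXPR, N_VERTS, 3))  # expression basis vectors

def reconstruct(alpha_id, delta_expr):
    """Linear face model: mean shape + identity offsets + expression offsets."""
    return (mean_shape
            + np.tensordot(alpha_id, id_basis, axes=1)
            + np.tensordot(delta_expr, expr_basis, axes=1))

# Coefficients as a tracker would estimate them: the source actor (live,
# wearing the HMD) and the target (prerecorded video of the same person).
alpha_src, delta_src = rng.normal(size=N_ID), rng.normal(size=N_EXPR)
alpha_tgt, delta_tgt = rng.normal(size=N_ID), rng.normal(size=N_EXPR)

# Reenactment step: keep the target's identity coefficients but drive the
# geometry with the source's expression coefficients; the resulting mesh
# would then be photo-realistically re-rendered into the target video.
reenacted = reconstruct(alpha_tgt, delta_src)
```

In a full pipeline, per-frame coefficients come from an optimization fitting the model to the input video, and the modified geometry is re-rendered with the target's appearance; the linear transfer step itself is as simple as shown.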
Pages: 15
Related papers
50 items in total
  • [1] Real-time Expression Transfer for Facial Reenactment
    Thies, Justus
    Zollhoefer, Michael
    Niessner, Matthias
    Valgaerts, Levi
    Stamminger, Marc
    Theobalt, Christian
    ACM TRANSACTIONS ON GRAPHICS, 2015, 34 (06):
  • [2] Mobile Real-Time Eye-Tracking for Gaze-Aware Security Surveillance Support Systems
    Marois, Alexandre
    Lafond, Daniel
    Vachon, Francois
    Harvey, Eric R.
    Martin, Bruno
    Tremblay, Sebastien
    INTELLIGENT HUMAN SYSTEMS INTEGRATION 2020, 2020, 1131 : 201 - 207
  • [3] Face2Face: Real-time facial reenactment
    Thies, Justus
    IT-INFORMATION TECHNOLOGY, 2019, 61 (2-3): : 143 - 146
  • [4] Real-time 3D Face Reconstruction and Gaze Tracking for Virtual Reality
    Chen, Shu-Yu
    Gao, Lin
    Lai, Yu-Kun
    Rosin, Paul L.
    Xia, Shihong
    25TH 2018 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES (VR), 2018, : 525 - 526
  • [5] Real-Time Facial Expression Recognition Based on Image Processing in Virtual Reality
    Qingzhen Gong
    Xuefang Liu
    Yongqiang Ma
    International Journal of Computational Intelligence Systems, 18 (1)
  • [6] Real-time facial and eye gaze tracking system
    Park, KR
    Kim, J
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2005, E88D (06): : 1231 - 1238
  • [7] 'Real-time' virtual reality and the limits of immersion
    Misek, Richard
    SCREEN, 2020, 61 (04) : 615 - 624
  • [8] Real-Time Context Aware Audio Augmented Reality
    Arvanitis, Gerasimos
    Moustakas, Konstantinos
    Fakotakis, Nikos
    SPEECH AND COMPUTER (SPECOM 2015), 2015, 9319 : 333 - 340
  • [9] ReenactNet: Real-time Full Head Reenactment
    Koujan, Mohammad Rami
    Doukas, Michail Christos
    Roussos, Anastasios
    Zafeiriou, Stefanos
    2020 15TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2020), 2020, : 918 - 918
  • [10] Real-Time Recognition of Facial Expressions Using Facial Electromyograms Recorded Around the Eyes for Social Virtual Reality Applications
    Cha, Ho-Seung
    Choi, Seong-Jun
    Im, Chang-Hwan
    IEEE ACCESS, 2020, 8 : 62065 - 62075