Real-time Expressive Avatar Animation Generation based on Monocular Videos

Cited by: 4
Authors
Song, Wenfeng [1]
Wang, Xianfei [1]
Gao, Yang [2]
Hao, Aimin [3,4]
Hou, Xia [1]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Comp Sch, Beijing, Peoples R China
[2] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing Adv Innovat Ctr Biomed Engn, Beijing, Peoples R China
[3] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Computing methodologies → Computer graphics → Animation → Motion capture; Human-centered computing → Human-computer interaction (HCI) → Interactive systems and tools
DOI
10.1109/ISMAR-Adjunct57072.2022.00092
Chinese Library Classification
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Technologies for generating real-time animated avatars are widely useful in VR/AR, animation, and entertainment. Most existing approaches, however, rely on time-consuming, high-cost motion capture. This paper proposes an efficient, lightweight framework for dynamic avatar animation that generates facial expressions, gestures, and torso movements in real time, driven solely by monocular camera video. Specifically, the 3D posture and facial landmarks are estimated from the monocular video using BlazePose keypoints in our proposed framework. A novel adaptor mapping function then transforms the estimated kinematic topology onto the rigid skeletons of avatars. Without depending on costly motion-capture equipment and without being restricted to a fixed skeleton topology, our approach produces avatar animations with a higher level of fidelity. Finally, lip movements, facial expressions, and limb motions are generated in a unified framework, which allows our 3D virtual avatar to act like a real person. We have conducted extensive experiments to demonstrate the efficacy of the method for real-time avatar-related applications. Our project and software are publicly available for further research or practical use (https://github.com/xianfei/SysMocap/).
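For readers who want a concrete picture of the pipeline's front end, the sketch below shows how per-frame BlazePose body keypoints and facial landmarks could be extracted from a monocular video with MediaPipe Holistic, and how a pair of joints could be turned into a bone direction for a retargeting layer. This is a minimal illustrative sketch under stated assumptions: the use of MediaPipe's Python API, the video path, and the bone_direction helper are not taken from the paper, and the retargeting step is a generic bone-direction example, not the paper's adaptor mapping function. The released SysMocap project at the URL above is the authoritative implementation.

```python
# Minimal sketch (assumption: MediaPipe Holistic as the BlazePose front end).
# The retargeting step is a generic illustration, not the paper's adaptor
# mapping function.
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def bone_direction(landmarks, parent_idx, child_idx):
    """Unit vector from a parent joint to a child joint in world space."""
    p = landmarks.landmark[parent_idx]
    c = landmarks.landmark[child_idx]
    v = np.array([c.x - p.x, c.y - p.y, c.z - p.z])
    return v / (np.linalg.norm(v) + 1e-8)

cap = cv2.VideoCapture("input.mp4")  # hypothetical monocular video path
with mp_holistic.Holistic(model_complexity=1,
                          refine_face_landmarks=True) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_world_landmarks:
            # BlazePose indices 11 (left shoulder) and 13 (left elbow):
            # one bone direction that a retargeting layer could map onto
            # the avatar's upper-arm joint rotation.
            upper_arm = bone_direction(results.pose_world_landmarks, 11, 13)
        if results.face_landmarks:
            # 468 facial landmarks; in a full system these would drive
            # expression blendshapes and lip movement.
            pass
cap.release()
```

In practice, each bone direction would be converted into a rotation for the corresponding avatar joint, which is where an adaptor between the BlazePose kinematic topology and the avatar's rig, as described in the abstract, becomes necessary.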
Pages: 429-434
Page count: 6