Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs

Cited by: 5
Authors
Aylagas, Monica Villanueva [1 ]
Leon, Hector Anadon [1 ]
Teye, Mattias [1 ]
Tollmar, Konrad [1 ]
Affiliations
[1] SEED, Electronic Arts (EA), Redwood City, CA 94065, USA
Keywords
Deep Learning; Facial animation; Tongue animation; Lip synchronization; Rig animation;
DOI
10.1111/cgf.14640
CLC classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
We present Voice2Face: a Deep Learning model that generates face and tongue animations directly from recorded speech. Our approach consists of two steps: a conditional Variational Autoencoder generates mesh animations from speech, while a separate module maps the animations to rig controller space. Our contributions include an automated method for speech style control, a method to train a model with data from multiple quality levels, and a method for animating the tongue. Unlike previous works, our model generates animations without speaker-dependent characteristics while allowing speech style control. We demonstrate through a user study that Voice2Face significantly outperforms a comparative state-of-the-art model in terms of perceived animation quality, and our quantitative evaluation suggests that Voice2Face yields more accurate lip closure in speech with bilabials through our speech style optimization. Both evaluations also show that our data quality conditioning scheme outperforms both an unconditioned model and a model trained with a smaller high-quality dataset. Finally, the user study shows a preference for animations including tongue. Results from our model can be seen at .
Pages: 255-265
Number of pages: 11
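
To make the two-step pipeline described in the abstract concrete, the sketch below shows a conditional Variational Autoencoder that decodes per-frame speech features into face/tongue mesh offsets, followed by a separate network that maps the generated mesh into rig controller space. This is a minimal PyTorch illustration under assumed feature sizes and module shapes (AUDIO_DIM, MESH_DIM, RIG_DIM, the network widths, and the loss weighting are all placeholders), not the authors' implementation; the paper's speech style control, data-quality conditioning, and tongue-specific handling are omitted.

# Minimal sketch of a speech-to-mesh cVAE plus a mesh-to-rig mapping module.
# All names and dimensions are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

AUDIO_DIM = 128       # assumed per-frame speech feature size
MESH_DIM = 3 * 468    # assumed flattened vertex offsets of a face/tongue mesh
LATENT_DIM = 32
RIG_DIM = 100         # assumed number of rig controller values


class SpeechToMeshCVAE(nn.Module):
    """Step 1: a conditional VAE mapping speech features to mesh offsets."""

    def __init__(self):
        super().__init__()
        # The encoder sees the target mesh together with the speech condition.
        self.encoder = nn.Sequential(
            nn.Linear(MESH_DIM + AUDIO_DIM, 256), nn.ReLU(),
            nn.Linear(256, 2 * LATENT_DIM),  # mean and log-variance
        )
        # The decoder reconstructs the mesh from the latent plus the condition.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + AUDIO_DIM, 256), nn.ReLU(),
            nn.Linear(256, MESH_DIM),
        )

    def forward(self, mesh, audio):
        mu, logvar = self.encoder(torch.cat([mesh, audio], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(torch.cat([z, audio], dim=-1))
        return recon, mu, logvar


class MeshToRig(nn.Module):
    """Step 2: a separate module mapping mesh animation to rig controller space."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(MESH_DIM, 256), nn.ReLU(),
            nn.Linear(256, RIG_DIM),
        )

    def forward(self, mesh):
        return self.net(mesh)


def cvae_loss(recon, mesh, mu, logvar, beta=1e-3):
    """Standard VAE objective: reconstruction error plus a beta-weighted KL term."""
    rec = F.mse_loss(recon, mesh)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl


if __name__ == "__main__":
    cvae, mesh_to_rig = SpeechToMeshCVAE(), MeshToRig()
    audio = torch.randn(4, AUDIO_DIM)   # a batch of per-frame speech features
    mesh = torch.randn(4, MESH_DIM)     # corresponding ground-truth mesh offsets
    recon, mu, logvar = cvae(mesh, audio)
    loss = cvae_loss(recon, mesh, mu, logvar)
    rig_controls = mesh_to_rig(recon)   # map the generated mesh to rig controllers
    print(loss.item(), rig_controls.shape)

At inference time, only the decoder and the mesh-to-rig module would be needed: a latent sample and the speech condition drive the mesh, which is then converted to rig controller curves that animators can edit.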