Deep-Learning-Based Facial Retargeting Using Local Patches

Cited by: 0
Authors
Choi, Yeonsoo [1 ]
Lee, Inyup [2 ]
Cha, Sihun [2 ]
Kim, Seonghyeon [3 ]
Jung, Sunjin [2 ]
Noh, Junyong [2 ]
Affiliations
[1] Netmarble F&C, Seoul, South Korea
[2] Korea Adv Inst Sci & Technol, Visual Media Lab, Daejeon, South Korea
[3] Anigma, Daejeon, South Korea
Keywords
animation; facial animation; motion capture; motion transfer; image and video processing;
DOI
10.1111/cgf.15263
CLC Number
TP31 [Computer Software];
Subject Classification Code
081202 ; 0835 ;
Abstract
In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion to preserve the semantics of the original facial motions after the retargeting. To achieve this, we propose a local patch-based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re-enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame for the creation of a complete facial animation sequence. Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportion.
Pages: 15
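
As a rough illustration of the three-module pipeline described in the abstract (patch extraction, reenactment, weight estimation applied per frame), the sketch below shows how such stages might be composed. All class names, signatures, and the rig parameter count are hypothetical assumptions made for illustration only; they are not the authors' implementation or API.

from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class LocalPatch:
    """A cropped facial region (e.g. an eye or the mouth) from a source video frame."""
    region: str          # semantic label of the patch (hypothetical)
    pixels: np.ndarray   # cropped image data


class PatchExtractor:
    """Stand-in for the Automatic Patch Extraction Module (hypothetical)."""
    def extract(self, frame: np.ndarray) -> List[LocalPatch]:
        # A real implementation would detect facial landmarks and crop
        # local regions around them; this placeholder returns nothing.
        return []


class Reenactor:
    """Stand-in for the Reenactment Module: source patch -> re-enacted target patch."""
    def reenact(self, patch: LocalPatch) -> LocalPatch:
        # A learned generator would re-render the source expression in the
        # stylized target character's local facial appearance.
        return patch


class WeightEstimator:
    """Stand-in for the Weight Estimation Module (hypothetical)."""
    def estimate(self, patches: List[LocalPatch]) -> np.ndarray:
        # Maps the re-enacted patches to per-frame animation parameters
        # for the target character rig; the size 52 is an assumed example.
        return np.zeros(52)


def retarget_sequence(frames: List[np.ndarray]) -> List[np.ndarray]:
    """Runs the three-stage pipeline over every frame of a source performance video."""
    extractor, reenactor, estimator = PatchExtractor(), Reenactor(), WeightEstimator()
    weights_per_frame = []
    for frame in frames:
        source_patches = extractor.extract(frame)
        target_patches = [reenactor.reenact(p) for p in source_patches]
        weights_per_frame.append(estimator.estimate(target_patches))
    return weights_per_frame

The per-frame loop reflects the abstract's statement that animation parameters are estimated at every frame to form a complete facial animation sequence; the internals of each module are left as placeholders.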