Audio-Driven Dubbing for User Generated Contents via Style-Aware Semi-Parametric Synthesis

Cited by: 3
Authors
Song, Linsen [1 ,2 ]
Wu, Wayne [3 ]
Fu, Chaoyou [1 ,2 ]
Loy, Chen Change [4 ]
He, Ran [1 ,2 ]
Affiliations
[1] CAS Beijing, Ctr Res Intelligent Percept & Comp, Ctr Excellence Brain Sci & Intelligence Technol, Natl Lab Pattern Recognit, CASIA, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China
[3] SenseTime Res, Beijing 100080, Peoples R China
[4] Nanyang Technol Univ, S Lab, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Talking face generation; video generation; GAN; thin-plate spline;
DOI
10.1109/TCSVT.2022.3210002
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
Existing automated dubbing methods are usually designed for Professionally Generated Content (PGC) production, which requires massive training data and training time to learn a person-specific audio-video mapping. In this paper, we investigate an audio-driven dubbing method that is more feasible for User Generated Content (UGC) production. There are two unique challenges in designing a method for UGC: 1) the appearances of speakers are diverse and arbitrary, as the method needs to generalize across users; 2) the available video data of any one speaker are very limited. To tackle these challenges, we first introduce a new Style Translation Network that integrates the speaking style of the target and the speaking content of the source via a cross-modal AdaIN module, enabling our model to quickly adapt to a new speaker. We then develop a semi-parametric video renderer that takes full advantage of the limited training data of the unseen speaker via a video-level retrieve-warp-refine pipeline. Finally, we propose a temporal regularization for the semi-parametric renderer to generate more continuous videos. Extensive experiments show that our method generates videos that accurately preserve various speaking styles, yet with considerably less training data and training time than existing methods. In addition, our method achieves faster testing speed than most recent methods.
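For illustration, the cross-modal AdaIN idea mentioned in the abstract can be sketched as follows: per-channel statistics of audio-derived content features are replaced by statistics predicted from a target-speaker style code. This is a minimal NumPy sketch under assumptions, not the authors' implementation; the array sizes, the style_code, and the toy style "MLP" (W) are all illustrative.

```python
# Minimal sketch of a cross-modal AdaIN step (illustrative, not the paper's network).
import numpy as np

rng = np.random.default_rng(0)
C, T = 64, 100                            # feature channels, time steps (assumed sizes)
content = rng.standard_normal((C, T))     # content features derived from source audio
style_code = rng.standard_normal(16)      # target-speaker style embedding (assumed)

# Toy linear "MLP" mapping the style code to per-channel scale (gamma) and bias (beta).
W = rng.standard_normal((2 * C, 16)) * 0.1
gamma, beta = np.split(W @ style_code, 2)

# AdaIN: normalize each content channel, then re-style it with the predicted statistics.
mu = content.mean(axis=1, keepdims=True)
sigma = content.std(axis=1, keepdims=True) + 1e-5
stylized = gamma[:, None] * (content - mu) / sigma + beta[:, None]

print(stylized.shape)  # (64, 100): style-injected content features
```

In this view, adapting to a new speaker only requires a new style code, which is consistent with the abstract's claim of quick adaptation from limited data.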
Pages: 1247-1261
Number of pages: 15