AnimeCeleb: Large-Scale Animation CelebHeads Dataset for Head Reenactment

Cited by: 4
Authors
Kim, Kangyeol [1 ,4 ]
Park, Sunghyun [1 ]
Lee, Jaeseong [1 ]
Chung, Sunghyo [2 ]
Lee, Junsoo [3 ]
Choo, Jaegul [1 ,4 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] Korea Univ, Seoul, South Korea
[3] Naver Webtoon, Seongnam Si, South Korea
[4] Letsur Inc, Seongnam Si, South Korea
Source
Keywords
Animation dataset; Head reenactment; Cross-domain
DOI
10.1007/978-3-031-20074-8_24
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
We present a novel Animation CelebHeads dataset (AnimeCeleb) to address animation head reenactment. Unlike previous animation head datasets, we utilize 3D animation models as controllable image samplers, which can provide a large number of head images with corresponding detailed pose annotations. To facilitate the data creation process, we build a semi-automatic pipeline that leverages open 3D computer graphics software together with a developed annotation system. After training on AnimeCeleb, recent head reenactment models produce high-quality animation head reenactment results that are not achievable with existing datasets. Furthermore, motivated by metaverse applications, we propose a novel pose mapping method and architecture to tackle a cross-domain head reenactment task. During inference, a user can easily transfer their motion to an arbitrary animation head. Experiments demonstrate the usefulness of AnimeCeleb for training animation head reenactment models and the superiority of our cross-domain head reenactment model over state-of-the-art methods. Our dataset and code are available at this url.
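The abstract describes using 3D animation models as controllable image samplers that emit head images paired with detailed pose annotations. As a rough illustration of that idea only, the sketch below poses a rigged head and renders it together with its pose label using Blender's bpy scripting API; the abstract states only that open 3D computer graphics software is used, so the object name "Face", the shape-key names, and the output paths here are hypothetical placeholders, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): sample one annotated head image
# from a rigged 3D character inside Blender, assuming hypothetical object and
# shape-key names. Run with Blender's bundled Python interpreter.
import json
import math
import random

import bpy  # Blender Python API

head = bpy.data.objects["Face"]            # hypothetical mesh with facial shape keys
morphs = ["eye_close_L", "eye_close_R", "mouth_open", "smile"]  # hypothetical keys

# 1) Sample a random pose: head rotation (degrees) plus morph (blend-shape) weights.
angles = [random.uniform(-30, 30) for _ in range(3)]       # yaw, pitch, roll
weights = {name: random.random() for name in morphs}       # values in [0, 1]

# 2) Apply the sampled pose to the 3D model.
head.rotation_euler = tuple(math.radians(a) for a in angles)
for name, w in weights.items():
    head.data.shape_keys.key_blocks[name].value = w

# 3) Render the image and save the matching pose annotation alongside it.
bpy.context.scene.render.filepath = "/tmp/sample_0000.png"
bpy.ops.render.render(write_still=True)
with open("/tmp/sample_0000.json", "w") as f:
    json.dump({"rotation_deg": angles, "morphs": weights}, f)
```

Repeating this loop over many characters and sampled poses would yield image-annotation pairs of the kind the dataset provides; the actual annotation format and morph vocabulary are defined by the paper, not by this sketch.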
Pages: 414-430
Page count: 17