Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection

Citations: 0
Authors
Zhang, Zhenyu [1]
Ge, Yanhao
Chen, Renwang
Tai, Ying
Yan, Yan
Yang, Jian
Wang, Chengjie
Li, Jilin
Huang, Feiyue
Affiliations
[1] Tencent Youtu Lab, Shanghai, People's Republic of China
Keywords
MORPHABLE MODEL; SHAPE
DOI
10.1109/CVPR46437.2021.01399
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Non-parametric face modeling aims to reconstruct a 3D face from images alone, without shape assumptions. While such models predict plausible facial details, they tend to over-rely on local color appearance and suffer from ambiguous noise. To address this problem, this paper presents a novel Learning to Aggregate and Personalize (LAP) framework for unsupervised, robust 3D face modeling. Instead of relying on a controlled environment, the proposed method implicitly disentangles ID-consistent and scene-specific faces from an unconstrained photo set. Specifically, to learn an ID-consistent face, LAP adaptively aggregates the intrinsic face factors of an identity via a novel curriculum learning approach with a relaxed consistency loss. To adapt the face to a personalized scene, we propose a novel attribute-refining network that modifies the ID-consistent face with target attributes and details. With the proposed method, unsupervised 3D face modeling benefits from meaningful facial image structure and possibly higher resolutions. Extensive experiments on benchmarks show that LAP recovers superior or competitive face shape and texture compared with state-of-the-art (SOTA) methods, with or without prior and supervision.
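
To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the aggregate-then-personalize idea. It is an illustrative assumption, not the authors' implementation: the encoder, the confidence-weighted pooling over the photo set, and the refiner head are hypothetical stand-ins for LAP's adaptive aggregation of intrinsic face factors and its attribute-refining network.

    # A minimal sketch of aggregate-then-personalize, NOT the authors' code.
    # All module designs, tensor shapes, and the confidence-weighted pooling
    # are illustrative assumptions based only on the abstract.
    import torch
    import torch.nn as nn

    class AggregateAndPersonalize(nn.Module):
        def __init__(self, feat_dim: int = 256):
            super().__init__()
            # Per-image encoder: maps an image to an intrinsic face code
            # plus one scalar confidence used for weighted aggregation.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim + 1),
            )
            # Attribute-refining head (hypothetical): fuses the aggregated
            # ID-consistent code with the target image's scene-specific code.
            self.refiner = nn.Sequential(
                nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
                nn.Linear(feat_dim, feat_dim),
            )

        def forward(self, photo_set: torch.Tensor, target: torch.Tensor):
            # photo_set: (N, 3, H, W) unconstrained photos of one identity
            # target:    (1, 3, H, W) image whose scene attributes we keep
            out = self.encoder(photo_set)               # (N, feat_dim + 1)
            codes, logits = out[:, :-1], out[:, -1]     # split code / confidence
            weights = torch.softmax(logits, dim=0)      # (N,) aggregation weights
            id_code = (weights.unsqueeze(1) * codes).sum(dim=0)  # ID-consistent
            scene_code = self.encoder(target)[0, :-1]   # target's intrinsic code
            # Personalize: modify the aggregated face with target attributes.
            return self.refiner(torch.cat([id_code, scene_code], dim=0))

    model = AggregateAndPersonalize()
    photos = torch.randn(5, 3, 64, 64)   # toy photo collection
    target = torch.randn(1, 3, 64, 64)
    personalized_code = model(photos, target)
    print(personalized_code.shape)       # torch.Size([256])

In this reading, the softmax weights play the role of the adaptive aggregation over the photo set, and concatenating the target's own code with the aggregated identity code mirrors refining the ID-consistent face with scene-specific attributes and details.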
Pages: 14209-14219
Page count: 11
Related Papers
50 in total
  • [1] Learning to Restore 3D Face from In-the-Wild Degraded Images
    Zhang, Zhenyu
    Ge, Yanhao
    Tai, Ying
    Huang, Xiaoming
    Wang, Chengjie
    Tang, Hao
    Huang, Dongjin
    Xie, Zhifeng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 4227 - 4237
  • [2] On Learning 3D Face Morphable Model from In-the-Wild Images
    Tran, Luan
    Liu, Xiaoming
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (01) : 157 - 171
  • [3] Learning an Animatable Detailed 3D Face Model from In-The-Wild Images
    Feng, Yao
    Feng, Haiwen
    Black, Michael J.
    Bolkart, Timo
    ACM TRANSACTIONS ON GRAPHICS, 2021, 40 (04)
  • [4] 3D Face Morphable Models "In-the-Wild"
    Booth, James
    Antonakos, Epameinondas
    Ploumpis, Stylianos
    Trigeorgis, George
    Panagakis, Yannis
    Zafeiriou, Stefanos
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 5464 - 5473
  • [5] Learning directly from synthetic point clouds for "in-the-wild" 3D face recognition
    Zhang, Ziyu
    Da, Feipeng
    Yu, Yi
    PATTERN RECOGNITION, 2022, 123
  • [6] Learning Free-Form Deformation for 3D Face Reconstruction from In-The-Wild Images
    Jung, Harim
    Oh, Myeong-Seok
    Lee, Seong-Whan
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 2737 - 2742
  • [7] Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes
    Choi, Hongsuk
    Moon, Gyeongsik
    Park, JoonKyu
    Lee, Kyoung Mu
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 1465 - 1474
  • [8] Synthesising 3D Facial Motion from "In-the-Wild" Speech
    Tzirakis, Panagiotis
    Papaioannou, Athanasios
    Lattas, Alexandros
    Tarasiou, Michail
    Schuller, Bjoern
    Zafeiriou, Stefanos
    2020 15TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2020), 2020, : 265 - 272
  • [9] Learning to regulate 3D head shape by removing occluding hair from in-the-wild images
    Anisetty, Sohan
    Saravanabavan, Varsha
    Cai, Yiyu
    2022 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY ADJUNCT (ISMAR-ADJUNCT 2022), 2022, : 403 - 408
  • [10] Real-Time 3D Face Fitting and Texture Fusion on In-the-Wild Videos
    Huber, Patrik
    Kopp, Philipp
    Christmas, William
    Raetsch, Matthias
    Kittler, Josef
    IEEE SIGNAL PROCESSING LETTERS, 2017, 24 (04) : 437 - 441