Deep domain-invariant learning for facial age estimation

Times Cited: 3
Authors
Bao, Zenghao [1 ,2 ,3 ]
Luo, Yutian [4 ]
Tan, Zichang [1 ,2 ,3 ]
Wan, Jun [1 ,2 ,3 ]
Ma, Xibo [1 ,2 ,3 ]
Lei, Zhen [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, CBSR, Beijing, Peoples R China
[2] Chinese Acad Sci, Inst Automat, NLPR, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, Beijing, Peoples R China
[4] Macau Univ Sci & Technol, Macau, Peoples R China
Keywords
Deep learning; Facial age estimation; Domain generalization;
DOI
10.1016/j.neucom.2023.02.037
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Previous studies in facial age estimation achieve promising performance when the training and test sets share similar conditions. However, these methods often fail to maintain performance and degrade significantly when encountering unseen domains. We therefore propose a novel method named Deep Domain-Invariant Learning (DDIL) to address the Out-of-Distribution (OOD) generalization problem in facial age estimation. The proposed DDIL consists of a domain-invariant module and a style-invariant module. The former extracts domain-specific features and trains a domain-invariant feature extractor by reducing the covariance discrepancy among features from different domains, while the latter leverages style randomization to overcome the inductive bias of CNNs towards styles. By consolidating these two modules, DDIL effectively reduces the influence of domain discrepancy. Extensive experiments on multiple age benchmark datasets under the Leave-One-Domain-Out Cross-Validation setting demonstrate superior performance in tackling age estimation generalization. (c) 2023 Published by Elsevier B.V.
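The two ideas named in the abstract have well-known analogues that can be sketched concretely: covariance-discrepancy reduction resembles a CORAL-style alignment loss, and style randomization resembles MixStyle-style mixing of per-sample feature statistics. The sketch below is a minimal numpy illustration of those analogues, not the paper's actual implementation; the function names `coral_loss` and `style_randomize` and the 2-D (batch, feature) layout are assumptions for illustration.

```python
import numpy as np

def covariance(features):
    """Unbiased feature covariance for a batch of shape (n, d)."""
    n = features.shape[0]
    centered = features - features.mean(axis=0, keepdims=True)
    return centered.T @ centered / (n - 1)

def coral_loss(source, target):
    """CORAL-style discrepancy: squared Frobenius distance between the
    covariance matrices of two feature batches, scaled by 1/(4*d^2)."""
    d = source.shape[1]
    diff = covariance(source) - covariance(target)
    return np.sum(diff ** 2) / (4 * d ** 2)

def style_randomize(x, rng, alpha=0.1):
    """MixStyle-style randomization, simplified to 2-D (n, d) features:
    normalize each sample by its own mean/std over the feature dimension,
    then re-style it with statistics mixed from a randomly permuted sample."""
    mu = x.mean(axis=1, keepdims=True)
    sig = x.std(axis=1, keepdims=True) + 1e-6
    normed = (x - mu) / sig
    perm = rng.permutation(x.shape[0])
    lam = rng.beta(alpha, alpha, size=(x.shape[0], 1))
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return normed * sig_mix + mu_mix
```

In a training loop, a loss of this shape would be added to the age-regression objective for each pair of source domains, while the style randomization would be applied to intermediate features as data augmentation; identical batches yield zero discrepancy, and mismatched feature scales yield a positive one.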
Pages: 86 - 93
Number of Pages: 8
Related Papers
50 items total
  • [1] A Dictionary Approach to Domain-Invariant Learning in Deep Networks
    Wang, Ze
    Cheng, Xiuyuan
    Sapiro, Guillermo
    Qiu, Qiang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [2] DIVIDE: Learning a Domain-Invariant Geometric Space for Depth Estimation
    Shim, Dongseok
    Kim, H. Jin
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (05) : 4663 - 4670
  • [3] Domain-Invariant Feature Learning for Domain Adaptation
    Tu, Ching-Ting
    Lin, Hsiau-Wen
    Lin, Hwei Jen
    Tokuyama, Yoshimasa
    Chu, Chia-Hung
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2023, 37 (03)
  • [4] ATTENTIVE ADVERSARIAL LEARNING FOR DOMAIN-INVARIANT TRAINING
    Meng, Zhong
    Li, Jinyu
    Gong, Yifan
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 6740 - 6744
  • [5] LEARNING DOMAIN-INVARIANT TRANSFORMATION FOR SPEAKER VERIFICATION
    Zhang, Hanyi
    Wang, Longbiao
    Lee, Kong Aik
    Liu, Meng
    Dang, Jianwu
    Chen, Hui
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7177 - 7181
  • [6] Domain-invariant representation learning using an unsupervised domain adversarial adaptation deep neural network
    Jia, Xibin
    Jin, Ya
    Su, Xing
    Hu, Yongli
    NEUROCOMPUTING, 2019, 355 : 209 - 220
  • [7] Learning Domain-Invariant Representations of Histological Images
    Lafarge, Maxime W.
    Pluim, Josien P. W.
    Eppenhof, Koen A. J.
    Veta, Mitko
    FRONTIERS IN MEDICINE, 2019, 6
  • [8] Gradient-aware domain-invariant learning for domain generalization
    Hou, Feng
    Zhang, Yao
    Liu, Yang
    Yuan, Jin
    Zhong, Cheng
    Zhang, Yang
    Shi, Zhongchao
    Fan, Jianping
    He, Zhiqiang
    MULTIMEDIA SYSTEMS, 2025, 31 (01)
  • [9] Graph-Diffusion-Based Domain-Invariant Representation Learning for Cross-Domain Facial Expression Recognition
    Wang, Run
    Song, Peng
    Zheng, Wenming
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (03) : 4163 - 4174
  • [10] On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources
    Trung Phung
    Trung Le
    Long Vuong
    Toan Tran
    Anh Tran
    Bui, Hung
    Dinh Phung
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34