Discriminant Distribution-Agnostic Loss for Facial Expression Recognition in the Wild

Cited by: 47
Authors
Farzaneh, Amir Hossein [1 ]
Qi, Xiaojun [1 ]
Affiliations
[1] Utah State Univ, Dept Comp Sci, Logan, UT 84322 USA
Keywords
DOI
10.1109/CVPRW50498.2020.00211
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Facial Expression Recognition (FER) has demonstrated remarkable progress due to the advancement of deep Convolutional Neural Networks (CNNs). As a visual recognition problem, FER aims to learn a mapping from the facial embedding space to a fixed set of expression categories using a supervised learning algorithm. Softmax loss, the de facto standard in practice, fails to learn discriminative features for efficient learning. Center loss and its variants are promising solutions that increase deep-feature discriminability in the embedding space and enable efficient learning; they fundamentally aim to maximize intra-class similarity and inter-class separation. However, center loss and its variants ignore the underlying extreme class imbalance in challenging wild FER datasets. As a result, they introduce a separation bias toward majority classes and leave minority classes overlapped in the embedding space. In this paper, we propose a novel Discriminant Distribution-Agnostic loss (DDA loss) to optimize the embedding space for extreme class-imbalance scenarios. Specifically, DDA loss enforces inter-class separation of deep features for both majority and minority classes. Any CNN model can be trained with the DDA loss to yield well-separated deep-feature clusters in the embedding space. We conduct experiments on two popular large-scale wild FER datasets (RAF-DB and AffectNet) to demonstrate the discriminative power of the proposed loss function.
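Since this record includes only the abstract, the snippet below is a minimal PyTorch sketch of the standard center-loss objective that the abstract takes as its starting point (softmax cross-entropy plus a term pulling each deep feature toward a learnable per-class center). It is not the authors' DDA loss, and all names and settings (CenterLoss, lambda_center, the 7-class setup, feature dimension) are illustrative assumptions rather than details from the paper.

import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    # Pulls each deep feature toward a learnable center of its ground-truth class.
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # features: (batch, feat_dim); labels: (batch,) integer class ids
        centers_batch = self.centers[labels]               # center of each sample's class
        return 0.5 * ((features - centers_batch) ** 2).sum(dim=1).mean()

# Joint objective: softmax cross-entropy + weighted center loss.
num_classes, feat_dim, lambda_center = 7, 512, 0.01        # 7 basic expressions (assumed)
ce_loss = nn.CrossEntropyLoss()
center_loss = CenterLoss(num_classes, feat_dim)

features = torch.randn(8, feat_dim, requires_grad=True)    # dummy CNN embeddings
logits = torch.randn(8, num_classes, requires_grad=True)   # dummy classifier outputs
labels = torch.randint(0, num_classes, (8,))

total = ce_loss(logits, labels) + lambda_center * center_loss(features, labels)
total.backward()                                           # gradients flow to features, logits, and centers

Under extreme class imbalance, the abstract argues, such centers end up well separated only for the majority classes; that is the failure mode DDA loss targets by enforcing inter-class separation for minority classes as well.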
Pages: 1631-1639
Page count: 9
Related Papers
50 records in total
  • [21] Distribution-Agnostic Linear Unbiased Estimation With Saturated Weights for Heterogeneous Data
    Grassi, Francesco
    Coluccia, Angelo
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2023, 71 : 2910 - 2926
  • [22] Facial Expression Recognition for In-the-wild Videos
    Liu, Hanyu
    Zeng, Jiabei
    Shan, Shiguang
    2020 15TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2020), 2020, : 615 - 618
  • [23] Facial expression recognition using local fisher discriminant analysis
    Zhang, Shiqing
    Zhao, Xiaoming
    Lei, Bicheng
    Communications in Computer and Information Science, 2011, 214 CCIS (PART 1): 443 - 448
  • [24] Kernel modified quadratic discriminant function for facial expression recognition
    Yang, Duan-Duan
    Jin, Lian-Wen
    Yin, Jun-Xun
    Zhen, Li-Xin
    Huang, Jian-Cheng
    ADVANCES IN MACHINE VISION, IMAGE PROCESSING, AND PATTERN ANALYSIS, 2006, 4153 : 66 - 75
  • [25] ORTHOGONAL DISCRIMINANT NEIGHBORHOOD PRESERVING EMBEDDING FOR FACIAL EXPRESSION RECOGNITION
    Liu, Shuai
    Ruan, Qiuqi
    Ni, Rongrong
    2010 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, 2010, : 2757 - 2760
  • [26] Facial Expression Recognition Using Local Fisher Discriminant Analysis
    Zhang, Shiqing
    Zhao, Xiaoming
    Lei, Bicheng
    ADVANCES IN COMPUTER SCIENCE, ENVIRONMENT, ECOINFORMATICS, AND EDUCATION, PT I, 2011, 214 : 443 - +
  • [27] Regularized Neighborhood Boundary Discriminant Analysis for Facial Expression Recognition
    Wang, Zhan
    Ruan, Qiuqi
    Liu, Shuai
    Guo, Song
    2011 IET 4TH INTERNATIONAL CONFERENCE ON WIRELESS, MOBILE & MULTIMEDIA NETWORKS (ICWMMN 2011), 2011, : 248 - 252
  • [28] Facial expression recognition using fuzzy kernel discriminant analysis
    Wu, Qingjiang
    Zhou, Xiaoyan
    Zheng, Wenming
    FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, PROCEEDINGS, 2006, 4223 : 780 - 783
  • [29] Label distribution learning for compound facial expression recognition in-the-wild: A comparative study
    Khelifa, Afifa
    Ghazouani, Haythem
    Barhoumi, Walid
    EXPERT SYSTEMS, 2025, 42 (02)
  • [30] Self-Paced Label Distribution Learning for In-The-Wild Facial Expression Recognition
    Shao, Jianjian
    Wu, Zhenqian
    Luo, Yuanyan
    Huang, Shudong
    Pu, Xiaorong
    Ren, Yazhou
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022