Discriminant Distribution-Agnostic Loss for Facial Expression Recognition in the Wild

Cited: 47
Authors: Farzaneh, Amir Hossein [1]; Qi, Xiaojun [1]
Affiliations: [1] Utah State Univ, Dept Comp Sci, Logan, UT 84322 USA
DOI: 10.1109/CVPRW50498.2020.00211
CLC classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Facial Expression Recognition (FER) has demonstrated remarkable progress due to the advancement of deep Convolutional Neural Networks (CNNs). FER's goal as a visual recognition problem is to learn a mapping from the facial embedding space to a set of fixed expression categories using a supervised learning algorithm. The softmax loss, the de facto standard in practice, fails to learn discriminative features for efficient learning. Center loss and its variants, as promising solutions, increase deep feature discriminability in the embedding space and enable efficient learning. They fundamentally aim to maximize intra-class similarity and inter-class separation in the embedding space. However, center loss and its variants ignore the underlying extreme class imbalance in challenging wild FER datasets. As a result, they introduce a separation bias toward majority classes and leave minority classes overlapped in the embedding space. In this paper, we propose a novel Discriminant Distribution-Agnostic loss (DDA loss) to optimize the embedding space for extreme class imbalance scenarios. Specifically, DDA loss enforces inter-class separation of deep features for both majority and minority classes. Any CNN model can be trained with the DDA loss to yield well-separated deep feature clusters in the embedding space. We conduct experiments on two popular large-scale wild FER datasets (RAF-DB and AffectNet) to show the discriminative power of the proposed loss function.
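The center loss the abstract builds on penalizes the distance between each deep feature and its class centroid, which is what "maximize intra-class similarity" means concretely. A minimal NumPy sketch of that idea is shown below; it is illustrative only (the function name and toy data are invented here, and the paper's DDA loss adds inter-class separation terms on top of this, which this sketch does not implement):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Half the mean squared distance between each feature and its class center.

    features: (N, D) deep embeddings from the CNN's penultimate layer
    labels:   (N,)   integer class ids
    centers:  (C, D) per-class centers (learned/updated during training)
    """
    diffs = features - centers[labels]            # (N, D) residual to own center
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

# Toy check: features lying exactly on their class centers give zero loss.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
labs = np.array([0, 1])
cents = np.array([[1.0, 0.0], [0.0, 1.0]])
print(center_loss(feats, labs, cents))  # -> 0.0
```

In training, this term is added to the softmax loss with a small weight, and the centers themselves are updated from mini-batch statistics; the class-imbalance failure mode the abstract describes arises because majority classes dominate both the gradient and the center updates.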
Pages: 1631-1639 (9 pages)
Related papers (50 in total)
  • [1] Towards Distribution-Agnostic Generalized Category Discovery
    Bai, Jianhong; Liu, Zuozhu; Wang, Hualiang; Chen, Ruizhe; Mu, Lianrui; Li, Xiaomeng; Zhou, Joey Tianyi; Feng, Yang; Wu, Jian; Hu, Haoji
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [2] Distribution-Agnostic Probabilistic Few-Shot Learning for Multimodal Recognition and Prediction
    Wang, Di; Xian, Xiaochen; Li, Haidong; Wang, Dong
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2025, 22: 1941-1957
  • [3] Separate Loss for Basic and Compound Facial Expression Recognition in the Wild
    Li, Yingjian; Lu, Yao; Li, Jinxing; Lu, Guangming
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101: 897-911
  • [4] Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids
    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler
    2016 NORTH AMERICAN POWER SYMPOSIUM (NAPS), 2016
  • [5] Facial Expression Recognition in the Wild via Deep Attentive Center Loss
    Farzaneh, Amir Hossein; Qi, Xiaojun
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021: 2401-2410
  • [6] Effective attention feature reconstruction loss for facial expression recognition in the wild
    Gong, Weijun; Fan, Yingying; Qian, Yurong
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (12): 10175-10187
  • [7] Intensity-Aware Loss for Dynamic Facial Expression Recognition in the Wild
    Li, Hanting; Niu, Hongjing; Zhu, Zhaoqing; Zhao, Feng
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37, NO 1, 2023: 67+
  • [8] Discriminant Graph Structures for Facial Expression Recognition
    Zafeiriou, Stefanos; Pitas, Ioannis
    IEEE TRANSACTIONS ON MULTIMEDIA, 2008, 10 (08): 1528-1540