THIN: THrowable Information Networks and Application for Facial Expression Recognition in the Wild

Cited by: 15
Authors
Arnaud, Estephe [1 ]
Dapogny, Arnaud [2 ]
Bailly, Kevin [1 ]
Affiliations
[1] Sorbonne Univ Paris, Inst Syst Intelligents & Robot, CNRS, ISIR, F-75005 Paris, France
[2] Datakalab, F-75017 Paris, France
Keywords
Facial expression recognition; deep ensemble methods; disentangled representations; ADOLESCENTS; ENSEMBLE
DOI
10.1109/TAFFC.2022.3144439
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
For a number of machine learning problems, an exogenous variable can be identified that heavily influences the appearance of the different classes, and an ideal classifier should be invariant to this variable. An example of such an exogenous variable is identity in facial expression recognition (FER). In this paper, we propose a dual exogenous/endogenous representation. The former captures the exogenous variable, whereas the latter models the task at hand (e.g., facial expression). We design a prediction layer that uses a tree-gated deep ensemble conditioned on the exogenous representation. We also propose an exogenous dispelling loss to remove the exogenous information from the endogenous representation. Thus, the exogenous information is used twice in a throwable fashion: first as a conditioning variable for the target task, and second to create invariance within the endogenous representation. We call this method THIN, standing for THrowable Information Networks. We experimentally validate THIN in several contexts where an exogenous variable can be identified, such as digit recognition under large rotations and shape recognition at multiple scales. We also apply it to FER with identity as the exogenous variable. We demonstrate that THIN significantly outperforms state-of-the-art approaches on several challenging datasets.
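The two uses of the exogenous representation described in the abstract, conditioning an ensemble and dispelling identity information, can be sketched roughly as follows. This is a minimal illustrative numpy mock-up, not the paper's actual architecture: all dimensions, weight matrices, and the uniform-target form of the dispelling loss are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes (not from the paper).
D = 16   # feature size of each representation
K = 4    # number of gated ensemble branches
C = 7    # number of expression classes
N = 10   # number of identity classes used for dispelling

# Dual representations for one image (stand-ins for encoder outputs).
z_exo  = rng.normal(size=D)   # exogenous (identity) representation
z_endo = rng.normal(size=D)   # endogenous (expression) representation

# (1) Conditioning: gate the ensemble branches with the exogenous code.
W_gate = rng.normal(size=(D, K))
gates = softmax(z_exo @ W_gate)            # K branch weights, sum to 1

W_branch = rng.normal(size=(K, D, C))
branch_logits = np.einsum('d,kdc->kc', z_endo, W_branch)  # (K, C)
pred = softmax(gates @ branch_logits)      # gated ensemble prediction

# (2) Dispelling: an auxiliary identity classifier applied to z_endo
# should be maximally confused, i.e., match the uniform distribution.
W_aux = rng.normal(size=(D, N))
p_id = softmax(z_endo @ W_aux)
# Cross-entropy to the uniform target; minimized (= log N) when p_id
# carries no identity information.
dispel_loss = -np.log(p_id + 1e-12).mean()
```

In a real training loop the encoders, gates, and branches would be learned jointly, with the dispelling term pushing identity cues out of the endogenous features while the gates still exploit them for conditioning.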
Pages: 2336 - 2348 (13 pages)