An Out-of-Distribution Attack Resistance Approach to Emotion Categorization

Cited by: 5
Authors
Shehu H.A. [1 ]
Browne W.N. [2 ]
Eisenbarth H. [3 ]
Affiliations
[1] The School of Engineering and Computer Science, Victoria University of Wellington, Wellington
[2] The School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane
[3] The School of Psychology, Victoria University of Wellington, Wellington
Source
IEEE Transactions on Artificial Intelligence
Keywords
Attack; cross-database; emotion categorization; emotion recognition; facial expression; facial landmarks; out-of-distribution
DOI
10.1109/TAI.2021.3105371
Abstract
Deep neural networks are a powerful model for feature extraction. They produce features that enable state-of-the-art performance on many tasks, including emotion categorization. However, their homogeneous representation of knowledge makes them prone to attacks, i.e., small modifications to training or test data that mislead the model. Emotion categorization can be performed either in-distribution (train and test on the same dataset) or out-of-distribution (train on one or more datasets and test on a different dataset). Our previously developed landmark-based technique, which improves robustness against attacks in in-distribution emotion categorization, could translate to out-of-distribution classification problems. This is important because different databases may vary, for example, in color or in the level of emotional expressiveness. We compared the landmark-based method with four state-of-the-art deep models (EfficientNetB0, InceptionV3, ResNet50, and VGG19), as well as with emotion categorization tools (the Python Facial Expression Analysis Toolbox and the Microsoft Azure Face application programming interface), by performing a cross-database experiment across six commonly used databases: the extended Cohn-Kanade, Japanese female facial expression, Karolinska directed emotional faces, National Institute of Mental Health Child Emotional Faces Picture Set, real-world affective faces, and psychological image collection at Stirling databases. The landmark-based method achieved significantly higher accuracy, averaging 47.44%, than most of the deep networks (<36%) and the emotion categorization tools (<37%), with considerably less execution time. This highlights that out-of-distribution emotion categorization, which requires detecting underlying emotional cues, is a much harder task than in-distribution emotion categorization, where superficial patterns can be detected with >97% accuracy.

Impact Statement: Recognising emotions from people's faces has real-world applications for computer-based perception, as it is often vital for interpersonal communication. Emotion recognition tasks are nowadays addressed with deep learning models that capture colour distributions and therefore classify images rather than emotions. This homogeneous knowledge representation contrasts with emotion categorization, which is hypothesised to be more heterogeneous and landmark-based. This is investigated through out-of-distribution emotion categorization problems, where the test samples are drawn from a different dataset than the training images. Our landmark-based method achieves significantly higher classification performance (on average) than four state-of-the-art deep networks (EfficientNetB0, InceptionV3, ResNet50, and VGG19), as well as other emotion categorization tools such as Py-Feat and the Azure Face API. We conclude that this improved generalization is relevant for future developments of emotion categorization tools. © 2021 IEEE.
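The out-of-distribution (cross-database) protocol described in the abstract can be illustrated with a minimal sketch: train a classifier on landmark-style features from one database and evaluate it on a different one. The feature generator, the two databases, and the SVM classifier below are illustrative stand-ins, not the authors' implementation.

# Minimal sketch of a cross-database (out-of-distribution) evaluation.
# The "landmark" features and the two databases are synthetic stand-ins;
# in practice they would come from a facial-landmark detector applied to
# two different emotion databases (e.g., train on CK+, test on JAFFE).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def stand_in_landmark_features(n_samples, n_landmarks=68):
    # Placeholder for real (x, y) landmark coordinates flattened per face.
    return rng.normal(size=(n_samples, n_landmarks * 2))

# Database A (training) and database B (testing), seven emotion classes each.
X_train, y_train = stand_in_landmark_features(300), rng.integers(0, 7, 300)
X_test, y_test = stand_in_landmark_features(100), rng.integers(0, 7, 100)

# Any classifier over landmark features could be used; an SVM is one choice.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("Cross-database accuracy:", accuracy_score(y_test, clf.predict(X_test)))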
Pages: 564-573
Page count: 9
Related papers
50 records in total
  • [11] On the Learnability of Out-of-distribution Detection
    Fang, Zhen
    Li, Yixuan
    Liu, Feng
    Han, Bo
    Lu, Jie
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [12] OOD ATTACK: GENERATING OVERCONFIDENT OUT-OF-DISTRIBUTION EXAMPLES TO FOOL DEEP NEURAL CLASSIFIERS
    Tang, Keke
    Cai, Xujian
    Peng, Weilong
    Li, Shudong
    Wang, Wenping
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 1260 - 1264
  • [13] Distribution Shift Inversion for Out-of-Distribution Prediction
    Yu, Runpeng
    Liu, Songhua
    Yang, Xingyi
    Wang, Xinchao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 3592 - 3602
  • [14] COMBOOD: A Semiparametric Approach for Detecting Out-of-distribution Data for Image Classification
    Rajasekaran, Magesh
    Sajol, Md Saiful Islam
    Berglind, Frej
    Mukhopadhyay, Supratik
    Das, Kamalika
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 643 - 651
  • [15] Gaussian-Based Approach for Out-of-Distribution Detection in Deep Learning
    Carvalho, Thiago
    Vellasco, Marley
    Amaral, Jose Franco
    24TH INTERNATIONAL CONFERENCE ON ENGINEERING APPLICATIONS OF NEURAL NETWORKS, EAAAI/EANN 2023, 2023, 1826 : 303 - 314
  • [16] CMG: A Class-Mixed Generation Approach to Out-of-Distribution Detection
    Wang, Mengyu
    Shao, Yijia
    Lin, Haowei
    Hu, Wenpeng
    Liu, Bing
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT IV, 2023, 13716 : 502 - 518
  • [17] RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection
    Khalid, Umar
    Esmaeili, Ashkan
    Karim, Nazmul
    Rahnavard, Nazanin
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 163 - 170
  • [18] Out-of-distribution in Human Activity Recognition
    Roy, Debaditya
    Komini, Vangjush
    Girdzijauskas, Sarunas
    2022 34TH WORKSHOP OF THE SWEDISH ARTIFICIAL INTELLIGENCE SOCIETY (SAIS 2022), 2022, : 1 - 10
  • [19] Out-of-Distribution Detection for Automotive Perception
    Nitsch, Julia
    Itkina, Masha
    Senanayake, Ransalu
    Nieto, Juan
    Schmidt, Max
    Siegwart, Roland
    Kochenderfer, Mykel J.
    Cadena, Cesar
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021, : 2938 - 2943
  • [20] Decoupling MaxLogit for Out-of-Distribution Detection
    Zhang, Zihan
    Xiang, Xiang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 3388 - 3397