Convolutional neural networks reveal differences in action units of facial expressions between face image databases developed in different countries

Cited by: 1
Authors
Inagaki, Mikio [1 ,2 ]
Ito, Tatsuro [3 ]
Shinozaki, Takashi [2 ,4 ]
Fujita, Ichiro [1 ,2 ,5 ]
Affiliations
[1] Osaka Univ, Grad Sch Frontier Biosci, Suita, Osaka, Japan
[2] Natl Inst Informat & Commun Technol, Ctr Informat & Neural Networks, Suita, Osaka, Japan
[3] Osaka Univ, Sch Engn Sci, Toyonaka, Osaka, Japan
[4] Kindai Univ, Fac Informat, Higashi Osaka, Osaka, Japan
[5] Ritsumeikan Univ, Res Org Sci & Technol, Kusatsu, Shiga, Japan
Source
FRONTIERS IN PSYCHOLOGY | 2022 / Vol. 13
Keywords
facial expression; emotion; facial movement; transfer learning; supervised learning; cultural universality; AlexNet; action unit; EMOTIONAL EXPRESSIONS; SELECTIVE RESPONSES; SINGLE NEURONS; RECOGNITION; CHALLENGES; IDENTITY; MODELS;
DOI
10.3389/fpsyg.2022.988302
CLC Classification Number
B84 [Psychology]
Discipline Classification Code
04; 0402
Abstract
Cultural similarities and differences in facial expressions have been a controversial issue in the field of facial communication. A key step in addressing the debate regarding the cultural dependency of emotional expression (and perception) is to characterize the visual features of specific facial expressions in individual cultures. Here we developed an image analysis framework for this purpose using convolutional neural networks (CNNs) that learned, through training, the visual features critical for classification. We analyzed photographs of facial expressions derived from two databases, each developed in a different country (Sweden and Japan), in which corresponding emotion labels were available. While the CNNs achieved classification accuracy far above chance after training with each database, they showed many misclassifications when analyzing faces from the database not used for training. These results suggest that facial features useful for classifying facial expressions differed between the databases. The selectivity of computational units in the CNNs to action units (AUs) of the face varied across the facial expressions. Importantly, the AU selectivity often differed drastically between the CNNs trained with the different databases. Similarity and dissimilarity of these tuning profiles partly explained the pattern of misclassifications, suggesting that the AUs are important for characterizing the facial features and differ between the two countries. The AU tuning profiles, especially those reduced by principal component analysis, are compact summaries useful for comparisons across different databases, and thus might advance our understanding of universality vs. specificity of facial expressions across cultures.
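The transfer-learning and cross-database evaluation workflow described in the abstract can be illustrated with a minimal sketch, assuming PyTorch/torchvision, an ImageNet-pretrained AlexNet, and hypothetical image folders (`sweden_db/`, `japan_db/`) standing in for the two national databases; this is not the authors' actual code or data layout.

```python
# Minimal sketch (not the authors' code): fine-tune an ImageNet-pretrained AlexNet
# on expression-labeled faces from one database, then test within- and cross-database.
# Assumes images are organized as <root>/<emotion_label>/<image>.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 7  # hypothetical label count; adjust to the emotion labels available

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                      # AlexNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory names for the Swedish and Japanese databases.
train_set = datasets.ImageFolder("sweden_db/train", transform=preprocess)
test_same = datasets.ImageFolder("sweden_db/test", transform=preprocess)
test_cross = datasets.ImageFolder("japan_db/test", transform=preprocess)

model = models.alexnet(pretrained=True)                 # transfer learning from ImageNet
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)      # replace the final layer

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def run_epoch(loader, train=True):
    """One pass over the loader; returns classification accuracy."""
    model.train(train)
    correct, total = 0, 0
    for images, labels in loader:
        with torch.set_grad_enabled(train):
            logits = model(images)
            loss = criterion(logits, labels)
            if train:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return correct / total

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
for epoch in range(10):
    run_epoch(train_loader, train=True)

# A large drop from within-database to cross-database accuracy indicates that the
# learned facial features do not transfer between the two databases.
same_acc = run_epoch(DataLoader(test_same, batch_size=32), train=False)
cross_acc = run_epoch(DataLoader(test_cross, batch_size=32), train=False)
print(f"within-database: {same_acc:.2f}, cross-database: {cross_acc:.2f}")
```

This sketch covers only the classification and cross-database test step; the AU-selectivity analysis and its principal component reduction described in the abstract would operate afterward on the activations of the trained network's units, a step not shown here.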
Pages: 16
Related Papers
6 records in total
  • [1] Facial Action Units for Training Convolutional Neural Networks
    Trinh Thi Doan Pham
    Won, Chee Sun
    [J]. IEEE ACCESS, 2019, 7 : 77816 - 77824
  • [2] Impact of Facial Expressions and Posture Variations in Face Recognition Rate on Different Image Databases
    Jaturawat, Phichaya
    Phankokkruad, Manop
    [J]. ADVANCED SCIENCE LETTERS, 2017, 23 (06) : 5443 - 5447
  • [3] Convolutional neural networks trained with a developmental sequence of blurry to clear images reveal core differences between face and object processing
    Jang, Hojin
    Tong, Frank
    [J]. JOURNAL OF VISION, 2021, 21 (12):
  • [4] Differences in Facial Expressions between Spontaneous and Posed Smiles: Automated Method by Action Units and Three-Dimensional Facial Landmarks
    Park, Seho
    Lee, Kunyoung
    Lim, Jae-A
    Ko, Hyunwoong
    Kim, Taehoon
    Lee, Jung-In
    Kim, Hakrim
    Han, Seong-Jae
    Kim, Jeong-Shim
    Park, Soowon
    Lee, Jun-Young
    Lee, Eui Chul
    [J]. SENSORS, 2020, 20 (04)
  • [5] Graph Convolutional Neural Networks for Micro-Expression Recognition-Fusion of Facial Action Units for Optical Flow Extraction
    Yang, Xuliang
    Fang, Yong
    Raga, Rodolfo C., Jr.
    [J]. IEEE ACCESS, 2024, 12 : 76319 - 76328
  • [6] Emotions in motion: the perception of dynamic and static emotional facial expressions investigated by ERPs and fMRI constrained source analysis reveal different spatio-temporal neural networks
    Trautmann, Sina A.
    Dominguez-Borras, Judith
    Escera, Carles
    Fehr, Thorsten
    Herrmann, Manfred
    [J]. PSYCHOPHYSIOLOGY, 2009, 46 : S51 - S51