Consistency, Uncertainty or Inconsistency Detection in Multimodal Emotion Recognition

Citations: 0
Authors
Fantini, Alessia [1 ,2 ]
Pilato, Giovanni [2 ]
Vitale, Gianpaolo [2 ]
Affiliations
[1] Univ Pisa, Pisa, Italy
[2] CNR, ICAR, Italian Natl Res Council, Palermo, Italy
Keywords
Emotion Detection; Mood; Human-Robot Interaction; Inconsistency Detection; Architecture
DOI
10.1109/IRC59093.2023.00067
CLC Number
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Humans exploit several sensory channels to recognize emotions and combine the information coming from the different channels into a single perception. Emotion Perception (EP) is also closely related to the Theory of Mind (ToM), which includes processes that capture socially and emotionally relevant inputs, interpret their meaning, and direct responses accordingly. In this paper, we present a first step towards recognizing incoherence in emotions, which exploits a three-level cognitive architecture. Starting from multimodal emotion recognition, a decision-maker determines whether a situation of consistency, uncertainty, or inconsistency exists and ultimately attempts to identify which case occurs. The detection is based on a suitable vector representation, at the conceptual level of the architecture, of moods on the Russell diagram. A system designed in this way can improve the effectiveness of HRI by allowing a robot to form an estimate of the actual emotional state of the person it interacts with.
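The abstract does not spell out the decision rule used by the decision-maker, so the following is only a minimal sketch of one plausible reading: each modality contributes a mood vector on the valence-arousal plane of the Russell diagram, and the consistency/uncertainty/inconsistency label is derived from how far apart these vectors point. The function names, angular-distance criterion, and thresholds below are illustrative assumptions, not the method reported in the paper.

```python
import numpy as np

# Each modality (e.g., face, voice, text) yields a mood estimate as a
# (valence, arousal) vector on the Russell diagram.
# ASSUMPTION: the angular-distance rule and both thresholds are hypothetical.

def angular_distance(u, v):
    """Angle in radians between two mood vectors on the circumplex."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(mood_vectors, consistent_max=np.pi / 6, inconsistent_min=np.pi / 2):
    """Label per-modality mood vectors as consistency, uncertainty, or
    inconsistency, based on the largest pairwise angle between them."""
    worst = max(
        angular_distance(mood_vectors[i], mood_vectors[j])
        for i in range(len(mood_vectors))
        for j in range(i + 1, len(mood_vectors))
    )
    if worst <= consistent_max:
        return "consistency"
    if worst >= inconsistent_min:
        return "inconsistency"
    return "uncertainty"

if __name__ == "__main__":
    face = np.array([0.8, 0.5])    # positive valence, moderately high arousal
    voice = np.array([0.7, 0.3])   # close to the facial estimate
    text = np.array([-0.6, -0.2])  # negative valence: conflicting channel
    print(classify([face, voice]))        # -> consistency
    print(classify([face, voice, text]))  # -> inconsistency
```

Under these assumptions, channels that agree closely in direction are declared consistent, a strong directional conflict is declared inconsistent, and intermediate disagreement is left as uncertainty, which the robot could resolve by gathering more evidence.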
Pages: 377 - 380
Number of Pages: 4
Related Papers
50 records in total
  • [21] A Multimodal Corpus for Emotion Recognition in Sarcasm
    Ray, Anupama
    Mishra, Shubham
    Nunna, Apoorva
    Bhattacharyya, Pushpak
    LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 6992 - 7003
  • [22] Multimodal human emotion/expression recognition
    Chen, LS
    Huang, TS
    Miyasato, T
    Nakatsu, R
    AUTOMATIC FACE AND GESTURE RECOGNITION - THIRD IEEE INTERNATIONAL CONFERENCE PROCEEDINGS, 1998, : 366 - 371
  • [23] Emotion Recognition using Multimodal Features
    Zhao, Jinming
    Chen, Shizhe
    Wang, Shuai
    Jin, Qin
    2018 FIRST ASIAN CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII ASIA), 2018,
  • [24] A Multimodal Dataset for Mixed Emotion Recognition
    Yang, Pei
    Liu, Niqi
    Liu, Xinge
    Shu, Yezhi
    Ji, Wenqi
    Ren, Ziqi
    Sheng, Jenny
    Yu, Minjing
    Yi, Ran
    Zhang, Dan
    Liu, Yong-Jin
    SCIENTIFIC DATA, 2024, 11 (01)
  • [25] Multimodal approaches for emotion recognition: A survey
    Sebe, N
    Cohen, I
    Gevers, T
    Huang, TS
    INTERNET IMAGING VI, 2005, 5670 : 56 - 67
  • [26] Multimodal Emotion Recognition Based on the Decoupling of Emotion and Speaker Information
    Gajsek, Rok
    Struc, Vitomir
    Mihelic, France
    TEXT, SPEECH AND DIALOGUE, 2010, 6231 : 275 - 282
  • [27] SR-CIBN: Semantic relationship-based consistency and inconsistency balancing network for multimodal fake news detection
    Yu, Hongzhu
    Wu, Hongchen
    Fang, Xiaochang
    Li, Meng
    Zhang, Huaxiang
    NEUROCOMPUTING, 2025, 635
  • [28] Comparing Recognition Performance and Robustness of Multimodal Deep Learning Models for Multimodal Emotion Recognition
    Liu, Wei
    Qiu, Jie-Lin
    Zheng, Wei-Long
    Lu, Bao-Liang
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2022, 14 (02) : 715 - 729
  • [29] COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition
    Tellamekala, Mani Kumar
    Amiriparian, Shahin
    Schuller, Bjorn W.
    Andre, Elisabeth
    Giesbrecht, Timo
    Valstar, Michel
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (02) : 805 - 822
  • [30] Length Uncertainty-Aware Graph Contrastive Fusion Network for multimodal physiological signal emotion recognition
    Li, Guangqiang
    Chen, Ning
    Zhu, Hongqing
    Li, Jing
    Xu, Zhangyong
    Zhu, Zhiying
    NEURAL NETWORKS, 2025, 187