Affect in Multimodal Information

Cited by: 9
Authors
Esposito, Anna [1 ]
Institution
[1] Univ Naples 2, Dipartimento Psicol, I-81100 Caserta, Italy
Keywords
NONVERBAL-COMMUNICATION; VOCAL COMMUNICATION; FACIAL EXPRESSIONS; FEATURE-EXTRACTION; EMOTION; SPEECH; FACE; RECOGNITION; PERCEPTION; ANIMATION;
DOI
10.1007/978-1-84800-306-4_12
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In face-to-face communication, the emotional state of the speaker is transmitted to the listener through a synthetic process that involves both the verbal and the nonverbal modalities of communication. From this point of view, the transmission of the information content is redundant, since the same information is conveyed through several channels. How much information about the speaker's emotional state is transmitted by each channel, and which channel plays the major role in transferring such information? The present study addresses these questions through a perceptual experiment that evaluates the subjective perception of emotional states through single channels (either visual or auditory) and combined channels (visual and auditory). Results seem to show that, taken separately, the semantic content of the message and the visual content of the message carry the same amount of information as the combined channels, suggesting that each channel performs a robust encoding of the emotional features that is very helpful in recovering the perception of the emotional state when one of the channels is degraded by noise.
Pages: 203–226
Page count: 24
Related papers
50 items total
  • [31] Mothers' multimodal information processing is modulated by multimodal interactions with their infants
    Tanaka, Yukari
    Fukushima, Hirokata
    Okanoya, Kazuo
    Myowa-Yamakoshi, Masako
    SCIENTIFIC REPORTS, 2014, 4
  • [33] Fast Multimodal Trajectory Prediction for Vehicles Based on Multimodal Information Fusion
    Ge, Likun
    Wang, Shuting
    Wang, Guangqi
    ACTUATORS, 2025, 14 (03)
  • [34] Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations
    Mai, S.
    Zeng, Y.
    Hu, H.
    IEEE Transactions on Multimedia, 2023, 25 : 4121 - 4134
  • [35] Does temporal asynchrony affect multimodal curvature detection?
    Winges, Sara A.
    Eonta, Stephanie E.
    Soechting, John F.
    EXPERIMENTAL BRAIN RESEARCH, 2010, 203 (01) : 1 - 9
  • [36] Emergent information for multimodal perception and control
    Stoffregen, T. A.
    Bardy, B. G.
    INTERNATIONAL JOURNAL OF SPORT PSYCHOLOGY, 2010, 41 : 39 - 41
  • [37] Gaussian Process Dynamical Models for Multimodal Affect Recognition
    Garcia, Hernan F.
    Alvarez, Mauricio A.
    Orozco, Alvaro A.
    2016 38TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2016, : 850 - 853
  • [38] Multimodal Affect: Perceptually Evaluating an Affective Talking Head
    Legde, Katharina
    Castillo, Susana
    Cunningham, Douglas W.
    ACM TRANSACTIONS ON APPLIED PERCEPTION, 2015, 12 (04)
  • [39] MULTIMODAL AFFECT MODELING AND RECOGNITION FOR EMPATHIC ROBOT COMPANIONS
    Castellano, Ginevra
    Leite, Iolanda
    Pereira, Andre
    Martinho, Carlos
    Paiva, Ana
    McOwan, Peter W.
    INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS, 2013, 10 (01)
  • [40] Does the menstrual cycle affect the multimodal ultrasound tomography?
    Forte, Serafino
    Dellas, Sophie
    Stieltjes, Bram
    Bongartz, Georg
    ACTA RADIOLOGICA, 2019, 60 (07) : 846 - 851