Affect in Multimodal Information

Cited by: 9
Author
Esposito, Anna [1 ]
Affiliation
[1] Univ Naples 2, Dipartimento Psicol, I-81100 Caserta, Italy
Keywords
NONVERBAL-COMMUNICATION; VOCAL COMMUNICATION; FACIAL EXPRESSIONS; FEATURE-EXTRACTION; EMOTION; SPEECH; FACE; RECOGNITION; PERCEPTION; ANIMATION;
DOI
10.1007/978-1-84800-306-4_12
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In face-to-face communication, the emotional state of the speaker is transmitted to the listener through a synthetic process that involves both the verbal and the nonverbal modalities of communication. From this point of view, the transmission of the information content is redundant, because the same information is conveyed through several channels. How much information about the speaker's emotional state is transmitted by each channel, and which channel plays the major role in transferring such information? The present study addresses these questions through a perceptual experiment that evaluates the subjective perception of emotional states through the single channels (either visual or auditory) and the combined channels (visual and auditory). The results suggest that, taken separately, the semantic content and the visual content of the message each carry the same amount of information as the combined channels, indicating that each channel performs a robust encoding of the emotional features that is very helpful in recovering the perception of the emotional state when one of the channels is degraded by noise.
Pages: 203 - 226
Number of pages: 24
Related Papers
50 records in total
  • [41] Multimodal Affect Detection in the Wild: Accuracy, Availability, and Generalizability
    Bosch, Nigel
    ICMI'15: PROCEEDINGS OF THE 2015 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2015, : 645 - 649
  • [42] Construction of Multimodal Transport Information Platform
    Wang, Ya
    Cheng, Yu
    Zhao, Zhi
    2018 4TH INTERNATIONAL CONFERENCE ON ENERGY MATERIALS AND ENVIRONMENT ENGINEERING (ICEMEE 2018), 2018, 38
  • [43] A framework for the intelligent multimodal presentation of information
    Rousseau, Cyril
    Bellik, Yacine
    Vernier, Frederic
    Bazalgette, Didier
    SIGNAL PROCESSING, 2006, 86 (12) : 3696 - 3713
  • [44] Conditioning Multimodal Information for Smart Environments
    Looney, D.
    Rehman, N. Ur
    Mandic, D.
    Rutkowski, T. M.
    Heidenreich, A.
    Beyer, D.
    2009 THIRD ACM/IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED SMART CAMERAS, 2009, : 306 - +
  • [45] MULTIMODAL PRESENTATION OF INFORMATION IN A MOBILE CONTEXT
    Jacquet, Christophe
    Bourda, Yolaine
    Bellik, Yacine
    ADVANCED INTELLIGENT ENVIRONMENTS, 2009, : 67 - +
  • [46] The Multimodal Dataset of Negative Affect and Aggression: A Validation Study
    Lefter, Iulia
    Fitrianie, Siska
    ICMI'18: PROCEEDINGS OF THE 20TH ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2018, : 376 - 383
  • [47] Multimodal integration of haptics, speech, and affect in an educational environment
    Nijholt, A
    INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATIONS AND CONTROL TECHNOLOGIES, VOL 2, PROCEEDINGS, 2004, : 94 - 97
  • [48] Does temporal asynchrony affect multimodal curvature detection?
    Sara A. Winges
    Stephanie E. Eonta
    John F. Soechting
    Experimental Brain Research, 2010, 203 : 1 - 9
  • [49] Exploring Multimodal Visual Features for Continuous Affect Recognition
    Sun, Bo
    Cao, Siming
    Li, Liandong
    He, Jun
    Yu, Lejun
    PROCEEDINGS OF THE 6TH INTERNATIONAL WORKSHOP ON AUDIO/VISUAL EMOTION CHALLENGE (AVEC'16), 2016, : 83 - 88
  • [50] Acquisition and application of multimodal sensing information
    Yin, Xukun
    Jiang, Changhui
    Zheng, Huadan
    Sampaolo, Angelo
    Xu, Kaijie
    FRONTIERS IN PHYSICS, 2023, 11