Evidence for a supra-modal representation of emotion from cross-modal adaptation

Cited by: 23
Authors
Pye, Annie [1 ]
Bestelmeyer, Patricia E. G. [1 ]
Affiliation
[1] Bangor Univ, Sch Psychol, Bangor LL57 2AS, Gwynedd, Wales
Keywords
Supra-modal representation; Cross-modal; Adaptation; Emotion; Voice; NEURAL REPRESENTATIONS; AUDITORY ADAPTATION; VISUAL-ADAPTATION; FACIAL IDENTITY; FACE; EXPRESSION; VOICE; PERCEPTION; SEX; SYSTEM;
DOI
10.1016/j.cognition.2014.11.001
CLC number
B84 [Psychology];
Discipline code
04; 0402;
Abstract
Successful social interaction hinges on accurate perception of emotional signals. These signals are typically conveyed multi-modally by the face and voice. Previous research has demonstrated uni-modal contrastive aftereffects for emotionally expressive faces or voices. Here we were interested in whether these aftereffects transfer across modalities, as theoretical models predict. We show that adaptation to facial expressions elicits significant auditory aftereffects. Adaptation to angry facial expressions caused ambiguous vocal stimuli drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful relative to adaptation to fearful faces. In a second experiment, we demonstrate that these aftereffects are not dependent on learned face-voice congruence, i.e., adaptation to one facial identity transferred to an unmatched voice identity. Taken together, our findings provide support for a supra-modal representation of emotion and suggest further that identity and emotion may be processed independently from one another, at least at the supra-modal level of the processing hierarchy. (C) 2014 Elsevier B.V. All rights reserved.
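The reported aftereffect amounts to a shift in how ambiguous anger-fear voice morphs are categorised after adapting to angry versus fearful faces. The paper does not include analysis code, but a standard way to quantify such a shift is to fit a logistic psychometric function to the proportion of "angry" responses at each morph level and compare the points of subjective equality (PSE) across adaptation conditions. The sketch below illustrates that idea in Python; the morph levels, response proportions, and function names are hypothetical placeholders, not data or code from the study.

```python
# Illustrative sketch (not from the paper): quantifying a cross-modal
# contrastive aftereffect as a PSE shift along an anger-fear voice morph
# continuum, using a logistic psychometric fit. All numbers are made up.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: probability of an 'angry' response."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Morph levels: 0 = 100% fearful voice, 1 = 100% angry voice (hypothetical).
morph_levels = np.linspace(0.0, 1.0, 7)

# Hypothetical proportions of "angry" responses after adapting to fearful
# vs. angry faces; a contrastive aftereffect predicts fewer "angry"
# responses (a rightward PSE shift) after adapting to angry faces.
p_angry_after_fear_adapt = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])
p_angry_after_anger_adapt = np.array([0.02, 0.05, 0.15, 0.35, 0.62, 0.85, 0.95])

def fit_pse(proportions):
    """Fit the logistic function and return the estimated PSE (50% point)."""
    params, _ = curve_fit(logistic, morph_levels, proportions, p0=[0.5, 10.0])
    return params[0]

pse_fear = fit_pse(p_angry_after_fear_adapt)
pse_anger = fit_pse(p_angry_after_anger_adapt)

# A positive shift means more of the continuum is heard as fearful after
# adapting to angry faces, i.e. a contrastive aftereffect that transfers
# from vision to audition.
print(f"PSE after fear-face adaptation:  {pse_fear:.3f}")
print(f"PSE after anger-face adaptation: {pse_anger:.3f}")
print(f"Aftereffect (PSE shift):         {pse_anger - pse_fear:+.3f}")
```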
Pages: 245-251
Page count: 7
Related papers
50 records in total
  • [1] Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects. CORTEX, 2018, 106: 47-64.
  • [2] Fallah, Mazyar; Jones, Sarah M.; Jordan, Heather. ERP Evidence for a Supra-Modal Mechanism of Exogenous Spatial Attention. CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY-REVUE CANADIENNE DE PSYCHOLOGIE EXPERIMENTALE, 2012, 66(04): 272.
  • [3] Wang, Youhui; Dong, Lanqing; Dai, Weihui. A Supra-modal Decoding Mechanism: Evidence from Chinese Speakers Learning English. AICMI 2019 - NEUROMANAGEMENT AND INTELLIGENT COMPUTING METHOD ON MULTIMODAL INTERACTION, 2019.
  • [4] Wang, Xiaodong; Guo, Xiaotao; Chen, Lin; Liu, Yijun; Goldberg, Michael E.; Xu, Hong. Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates. CEREBRAL CORTEX, 2017, 27(02): 1337-1346.
  • [5] Liu, Alexander H.; Jin, SouYoung; Lai, Cheng-I Jeff; Rouditchenko, Andrew; Oliva, Aude; Glass, James. Cross-Modal Discrete Representation Learning. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022: 3013-3035.
  • [6] Wen, Huanglu; You, Shaodi; Fu, Ying. Cross-modal dynamic convolution for multi-modal emotion recognition. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2021, 78.
  • [7] Gennari, Giulia; Dehaene, Stanislas; Valera, Chanel; Dehaene-Lambertz, Ghislaine. Spontaneous supra-modal encoding of number in the infant brain. CURRENT BIOLOGY, 2023, 33(10): 1906+.
  • [8] Hunter, Edyta Monika; Phillips, Louise H.; MacPherson, Sarah E. Effects of Age on Cross-Modal Emotion Perception. PSYCHOLOGY AND AGING, 2010, 25(04): 779-787.
  • [9] Yang, Dingkang; Huang, Shuai; Liu, Yang; Zhang, Lihua. Contextual and Cross-Modal Interaction for Multi-Modal Speech Emotion Recognition. IEEE SIGNAL PROCESSING LETTERS, 2022, 29: 2093-2097.
  • [10] Wang, Zheng; Xu, Xing; Wei, Jiwei; Xie, Ning; Shao, Jie; Yang, Yang. Quaternion Representation Learning for cross-modal matching. KNOWLEDGE-BASED SYSTEMS, 2023, 270.