Evidence for a supra-modal representation of emotion from cross-modal adaptation

Cited by: 23
Authors
Pye, Annie [1 ]
Bestelmeyer, Patricia E. G. [1 ]
Affiliations
[1] Bangor Univ, Sch Psychol, Bangor LL57 2AS, Gwynedd, Wales
Keywords
Supra-modal representation; Cross-modal; Adaptation; Emotion; Voice; NEURAL REPRESENTATIONS; AUDITORY ADAPTATION; VISUAL-ADAPTATION; FACIAL IDENTITY; FACE; EXPRESSION; VOICE; PERCEPTION; SEX; SYSTEM
DOI
10.1016/j.cognition.2014.11.001
Chinese Library Classification
B84 [Psychology]
Discipline classification codes
04; 0402
Abstract
Successful social interaction hinges on accurate perception of emotional signals. These signals are typically conveyed multi-modally by the face and voice. Previous research has demonstrated uni-modal contrastive aftereffects for emotionally expressive faces or voices. Here we were interested in whether these aftereffects transfer across modality, as theoretical models predict. We show that adaptation to facial expressions elicits significant auditory aftereffects. Adaptation to angry facial expressions caused ambiguous vocal stimuli drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful, relative to adaptation to fearful faces. In a second experiment, we demonstrate that these aftereffects do not depend on learned face-voice congruence, i.e., adaptation to one facial identity transferred to an unmatched voice identity. Taken together, our findings support a supra-modal representation of emotion and further suggest that identity and emotion may be processed independently from one another, at least at the supra-modal level of the processing hierarchy. © 2014 Elsevier B.V. All rights reserved.
Pages: 245-251
Page count: 7
Related papers
50 records in total (items [41]-[50] shown)
  • [41] Marmolejo-Ramos, Fernando; Montoro, Pedro R.; Jose Contreras, Maria; Rosa Elosua, Maria. The mapping of emotion words onto space: A cross-modal study. COGNITIVE PROCESSING, 2015, 16: S95.
  • [42] Zhang, Shihui; Wang, Wei; Zhao, Weibo; Wang, Lei; Li, Qunpeng. A cross-modal crowd counting method combining CNN and cross-modal transformer. IMAGE AND VISION COMPUTING, 2023, 129.
  • [43] Jing, Longlong; Vahdani, Elahe; Tan, Jiaxing; Tian, Yingli. Cross-Modal Center Loss for 3D Cross-Modal Retrieval. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2021), 2021: 3141-3150.
  • [44] Yang, Pengcheng; Zhang, Zhihan; Luo, Fuli; Li, Lei; Huang, Chengyang; Sun, Xu. Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information. 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019: 2680-2686.
  • [45] Huang, Yingying; Hu, Bingliang; Zhang, Yipeng; Gao, Chi; Wang, Quan. A semi-supervised cross-modal memory bank for cross-modal retrieval. NEUROCOMPUTING, 2024, 579.
  • [46] Schindler, Alexander; Gordea, Sergiu; Knees, Peter. Unsupervised Cross-Modal Audio Representation Learning from Unstructured Multilingual Text. PROCEEDINGS OF THE 35TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING (SAC'20), 2020: 706-713.
  • [47] Pinet, Svetlana; Martin, Clara D. Cross-modal interactions in language production: evidence from word learning. PSYCHONOMIC BULLETIN & REVIEW, 2024: 452-462.
  • [48] Jetzer, A. K.; Morel, A.; Magnin, M.; Jeanmonod, D. Cross-modal plasticity in the human thalamus: Evidence from intraoperative macrostimulations. NEUROSCIENCE, 2009, 164(04): 1867-1875.
  • [49] Melinger, A.; Mauner, G. When are implicit agents encoded? Evidence from cross-modal naming. BRAIN AND LANGUAGE, 1999, 68(1-2): 185-191.
  • [50] Robinson, Christopher W.; Sloutsky, Vladimir M. Attention and cross-modal processing: Evidence from heart rate analyses. COGNITION IN FLUX, 2010: 2639-2643.