Cortical operational synchrony during audio-visual speech integration

Cited: 46
Authors
Fingelkurts, An.A.
Fingelkurts, Al.A.
Krause, CM
Möttönen, R
Sams, M
Affiliations
[1] Moscow MV Lomonosov State Univ, Human Physiol Dept, Human Brain Res Grp, Moscow 119899, Russia
[2] BM Sci Brain & Mind Technol Res Ctr, FI-02601 Espoo, Finland
[3] Univ Helsinki, Cognit Sci Dept Psychol, FIN-00014 Helsinki, Finland
[4] Aalto Univ, Lab Computat Engn, Helsinki 02015, Finland
Keywords
multisensory integration; crossmodal; audio-visual; synchronization; operations; large-scale networks; MEG
DOI
10.1016/S0093-934X(03)00059-2
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
Information from different sensory modalities is processed in different cortical regions. However, our daily perception is based on the overall impression resulting from the integration of information from multiple sensory modalities. At present it is not known how the human brain integrates information from different modalities into a unified percept. Using a robust phenomenon known as the McGurk effect, the present study showed that audio-visual synthesis takes place within distributed and dynamic cortical networks with emergent properties. Various cortical sites within these networks interact with each other by means of so-called operational synchrony (Kaplan, Fingelkurts, Fingelkurts, & Darkhovsky, 1997). The temporal synchronization of cortical operations processing unimodal stimuli at different cortical sites reveals the importance of the temporal features of auditory and visual stimuli for audio-visual speech integration. (C) 2003 Elsevier Science (USA). All rights reserved.
Pages: 297-312
Page count: 16
Related Papers
50 records total
  • [21] Robust audio-visual speech recognition based on late integration
    Lee, Jong-Seok
    Park, Cheol Hoon
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2008, 10 (05) : 767 - 779
  • [22] An audio-visual speech recognition with a new mandarin audio-visual database
    Liao, Wen-Yuan
    Pao, Tsang-Long
    Chen, Yu-Te
    Chang, Tsun-Wei
    [J]. INT CONF ON CYBERNETICS AND INFORMATION TECHNOLOGIES, SYSTEMS AND APPLICATIONS/INT CONF ON COMPUTING, COMMUNICATIONS AND CONTROL TECHNOLOGIES, VOL 1, 2007, : 19 - +
  • [23] Expressive audio-visual speech
    Bevacqua, E
    Pelachaud, C
    [J]. COMPUTER ANIMATION AND VIRTUAL WORLDS, 2004, 15 (3-4) : 297 - 304
  • [24] Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis
    Yang, Karren
    Markovic, Dejan
    Krenn, Steven
    Agrawal, Vasu
    Richard, Alexander
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 8217 - 8227
  • [25] Audio-visual integration in schizophrenia
    de Gelder, B
    Vroomen, J
    Annen, L
    Masthof, E
    Hodiamont, P
    [J]. SCHIZOPHRENIA RESEARCH, 2003, 59 (2-3) : 211 - 218
  • [26] Retinotopic effects during spatial audio-visual integration
    Meienbrock, A.
    Naumer, M. J.
    Doehrmann, O.
    Singer, W.
    Muckli, L.
    [J]. NEUROPSYCHOLOGIA, 2007, 45 (03) : 531 - 539
  • [27] Audio-visual speech recognition based on joint training with audio-visual speech enhancement for robust speech recognition
    Hwang, Jung-Wook
    Park, Jeongkyun
    Park, Rae-Hong
    Park, Hyung-Min
    [J]. APPLIED ACOUSTICS, 2023, 211
  • [28] An audio-visual speech recognition system for testing new audio-visual databases
    Pao, Tsang-Long
    Liao, Wen-Yuan
    [J]. VISAPP 2006: PROCEEDINGS OF THE FIRST INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOL 2, 2006, : 192 - +
  • [29] LEARNING CONTEXTUALLY FUSED AUDIO-VISUAL REPRESENTATIONS FOR AUDIO-VISUAL SPEECH RECOGNITION
    Zhang, Zi-Qiang
    Zhang, Jie
    Zhang, Jian-Shu
    Wu, Ming-Hui
    Fang, Xin
    Dai, Li-Rong
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1346 - 1350
  • [30] Optimum integration weight for decision fusion audio-visual speech recognition
    Rajavel, R.
    Sathidevi, P. S.
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL SCIENCE AND ENGINEERING, 2015, 10 (1-2) : 145 - 154