Cortical operational synchrony during audio-visual speech integration

Cited by: 46
Authors
Fingelkurts, AA
Fingelkurts, AA
Krause, CM
Möttönen, R
Sams, M
Affiliations
[1] Moscow MV Lomonosov State Univ, Human Physiol Dept, Human Brain Res Grp, Moscow 119899, Russia
[2] BM Sci Brain & Mind Technol Res Ctr, FI-02601 Espoo, Finland
[3] Univ Helsinki, Cognit Sci Dept Psychol, FIN-00014 Helsinki, Finland
[4] Aalto Univ, Lab Computat Engn, Helsinki 02015, Finland
Keywords
multisensory integration; crossmodal; audio-visual; synchronization; operations; large-scale networks; MEG;
DOI
10.1016/S0093-934X(03)00059-2
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject Classification Codes
100104; 100213;
Abstract
Information from different sensory modalities is processed in different cortical regions. However, our daily perception is based on the overall impression resulting from the integration of information from multiple sensory modalities. At present it is not known how the human brain integrates information from different modalities into a unified percept. Using a robust phenomenon known as the McGurk effect, the present study shows that audio-visual synthesis takes place within distributed and dynamic cortical networks with emergent properties. Various cortical sites within these networks interact with each other by means of so-called operational synchrony (Kaplan, Fingelkurts, Fingelkurts, & Darkhovsky, 1997). The temporal synchronization of cortical operations processing unimodal stimuli at different cortical sites reveals the importance of the temporal features of auditory and visual stimuli for audio-visual speech integration. (C) 2003 Elsevier Science (USA). All rights reserved.
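The operational synchrony measure cited in the abstract rests on the idea that ongoing EEG/MEG signals are piecewise quasi-stationary, and that the moments of abrupt transition between segments tend to coincide across channels whose cortical sources cooperate in one operation. The following is a minimal illustrative sketch of that idea only, not the authors' actual algorithm: the segmentation criterion (`change_points`, an abrupt shift in local mean amplitude) and the matching tolerance are hypothetical simplifications chosen for clarity.

```python
import numpy as np

def change_points(x, win=20, thresh=2.0):
    """Crude quasi-stationary segmentation: mark sample indices where the
    mean absolute amplitude shifts abruptly between adjacent windows.
    (Hypothetical criterion, stand-in for the published segmentation.)"""
    marks = []
    for i in range(win, len(x) - win, win):
        a = np.abs(x[i - win:i]).mean()
        b = np.abs(x[i:i + win]).mean()
        if max(a, b) / (min(a, b) + 1e-12) > thresh:
            marks.append(i)
    return np.array(marks)

def synchrony_index(marks_x, marks_y, tol=5):
    """Fraction of transition points in channel x that coincide with a
    transition point in channel y within +/- tol samples."""
    if len(marks_x) == 0:
        return 0.0
    hits = sum(1 for m in marks_x
               if len(marks_y) and np.min(np.abs(marks_y - m)) <= tol)
    return hits / len(marks_x)
```

Two channels whose amplitude regimes switch at (nearly) the same moments score high on this index even when their waveforms differ, which is the point of the method: coupling is inferred from the timing of operational transitions, not from waveform correlation.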
Pages: 297 - 312
Page count: 16
Related Papers
50 records in total
  • [31] Optimum integration weight for decision fusion audio-visual speech recognition
    Rajavel, R.
    Sathidevi, P. S.
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL SCIENCE AND ENGINEERING, 2015, 10 (1-2) : 145 - 154
  • [32] Neural processing of audio-visual integration in speech perception: An MEG study
    Hiroe, Nobuo
    Shinozaki, Jun
    Yoshioka, Taku
    Sato, Masa-aki
    Sekiyama, Kaoru
    [J]. NEUROSCIENCE RESEARCH, 2010, 68 : E332 - E332
  • [33] Audio-Visual Multi-Channel Integration and Recognition of Overlapped Speech
    Yu, Jianwei
    Zhang, Shi-Xiong
    Wu, Bo
    Liu, Shansong
    Hu, Shoukang
    Geng, Mengzhe
    Liu, Xunying
    Meng, Helen
    Yu, Dong
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 2067 - 2082
  • [34] Separation of audio-visual speech sources: A new approach exploiting the audio-visual coherence of speech stimuli
    Sodoyer, D
    Schwartz, JL
    Girin, L
    Klinkisch, J
    Jutten, C
    [J]. EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING, 2002, 2002 (11) : 1165 - 1173
  • [35] Multimodal Integration for Large-Vocabulary Audio-Visual Speech Recognition
    Yu, Wentao
    Zeiler, Steffen
    Kolossa, Dorothea
    [J]. 28TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2020), 2021, : 341 - 345
  • [36] Separation of audio-visual speech sources: A new approach exploiting the audio-visual coherence of speech stimuli
    Sodoyer, D.
    [J]. EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING, Hindawi Publishing Corporation, 2002
  • [37] Separation of Audio-Visual Speech Sources: A New Approach Exploiting the Audio-Visual Coherence of Speech Stimuli
    David Sodoyer
    Jean-Luc Schwartz
    Laurent Girin
    Jacob Klinkisch
    Christian Jutten
    [J]. EURASIP Journal on Advances in Signal Processing, 2002
  • [38] Effects of audio-visual integration on the detection of masked speech and non-speech sounds
    Eramudugolla, Ranmalee
    Henderson, Rachel
    Mattingley, Jason B.
    [J]. BRAIN AND COGNITION, 2011, 75 (01) : 60 - 66
  • [39] A cortical circuit for audio-visual predictions
    Aleena R. Garner
    Georg B. Keller
    [J]. Nature Neuroscience, 2022, 25 : 98 - 105
  • [40] A cortical circuit for audio-visual predictions
    Garner, Aleena R.
    Keller, Georg B.
    [J]. NATURE NEUROSCIENCE, 2022, 25 (01) : 98 - 105