Matching heard and seen speech: An ERP study of audiovisual word recognition

Cited by: 12
|
Authors
Kaganovich, Natalya [1 ,2 ]
Schumaker, Jennifer [1 ]
Rowland, Courtney [1 ]
Affiliations
[1] Purdue Univ, Dept Speech Language & Hearing Sci, Lyles Porter Hall,715 Clin Dr, W Lafayette, IN 47907 USA
[2] Purdue Univ, Dept Psychol Sci, 703 Third St, W Lafayette, IN 47907 USA
Funding
U.S. National Institutes of Health;
Keywords
AUDITORY-VISUAL INTEGRATION; LEARNING-DISABILITIES; BRAIN POTENTIALS; PERCEPTION; CHILDREN; MEMORY; RETRIEVAL; COMPONENT; ADULTS;
DOI
10.1016/j.bandl.2016.04.010
CLC Number
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline Code
100104 ; 100213 ;
Abstract
Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined the relationship between two distinct stages of visual articulatory processing and SIN accuracy by combining a cross-modal repetition priming task with ERP recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether the subsequently presented silent visual articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of mismatch detection at the pre-lexical level. Congruent articulations elicited a significantly larger LPC, indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials was significantly correlated with individuals' improvement in SIN accuracy in the presence of the talker's face. (C) 2016 Elsevier Inc. All rights reserved.
Pages: 14-24
Page count: 11
Related Papers
50 records in total
  • [1] Fusion Architectures for Word-based Audiovisual Speech Recognition
    Wand, Michael
    Schmidhuber, Jurgen
    INTERSPEECH 2020, 2020, : 3491 - 3495
  • [2] Deficient Audiovisual Speech Perception in Schizophrenia: An ERP Study
    Ghaneirad, Erfan
    Saenger, Ellyn
    Szycik, Gregor R.
    Cus, Anja
    Moede, Laura
    Sinke, Christopher
    Wiswede, Daniel
    Bleich, Stefan
    Borgolte, Anna
    BRAIN SCIENCES, 2023, 13 (06)
  • [3] SPEECH CAN BE SEEN BEFORE IT IS HEARD
    CATHIARD, MA
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 1992, 27 (3-4) : 88 - 88
  • [4] Is the integration of heard and seen speech mandatory for infants?
    Desjardins, RN
    Werker, JF
    DEVELOPMENTAL PSYCHOBIOLOGY, 2004, 45 (04) : 187 - 203
  • [5] Functional activation for imitation of seen and heard speech
    Irwin, Julia R.
    Frost, Stephen J.
    Mencl, W. Einar
    Chen, Helen
    Fowler, Carol A.
    JOURNAL OF NEUROLINGUISTICS, 2011, 24 (06) : 611 - 618
  • [6] ALIGNING AUDIOVISUAL FEATURES FOR AUDIOVISUAL SPEECH RECOGNITION
    Tao, Fei
    Busso, Carlos
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [7] PARSING AND WORD MATCHING IN LINCOLN LABORATORY SPEECH RECOGNITION SYSTEM
    HALL, DE
    FORGIE, JW
    JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 1974, 56 : S27 - S27
  • [8] The Role of Audiovisual Speech in the Early Stages of Lexical Processing as Revealed by the ERP Word Repetition Effect
    Basirat, Anahita
    Brunelliere, Angele
    Hartsuiker, Robert
    LANGUAGE LEARNING, 2018, 68 : 80 - 101
  • [9] Audiovisual synchrony perception of simplified speech sounds heard as speech and non-speech
    Asakawa, Kaori
    Tanaka, Akihiro
    Sakamoto, Shuichi
    Iwaya, Yukio
    Suzuki, Yoiti
    ACOUSTICAL SCIENCE AND TECHNOLOGY, 2011, 32 (03) : 125 - 128
  • [10] AUDIOVISUAL PERCEPTION OF NATURAL SPEECH IS IMPAIRED IN ADULT DYSLEXICS: AN ERP STUDY
    Ruesseler, J.
    Gerth, I.
    Heldmann, M.
    Muente, T. F.
    NEUROSCIENCE, 2015, 287 : 55 - 65