Matching heard and seen speech: An ERP study of audiovisual word recognition

Cited by: 12
Authors
Kaganovich, Natalya [1 ,2 ]
Schumaker, Jennifer [1 ]
Rowland, Courtney [1 ]
Affiliations
[1] Purdue Univ, Dept Speech Language & Hearing Sci, Lyles Porter Hall,715 Clin Dr, W Lafayette, IN 47907 USA
[2] Purdue Univ, Dept Psychol Sci, 703 Third St, W Lafayette, IN 47907 USA
Funding
U.S. National Institutes of Health (NIH);
Keywords
AUDITORY-VISUAL INTEGRATION; LEARNING-DISABILITIES; BRAIN POTENTIALS; PERCEPTION; CHILDREN; MEMORY; RETRIEVAL; COMPONENT; ADULTS;
DOI
10.1016/j.bandl.2016.04.010
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline codes
100104; 100213;
Abstract
Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined the relationship between two distinct stages of visual articulatory processing and SIN accuracy by combining a cross-modal repetition priming task with ERP recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether a subsequently presented silent visual articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of mismatch detection at the pre-lexical level. Congruent articulations elicited a significantly larger late positive complex (LPC), indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials correlated significantly with individuals' SIN accuracy improvement in the presence of the talker's face. (C) 2016 Elsevier Inc. All rights reserved.
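As an illustration of the individual-differences analysis the abstract describes, below is a minimal sketch in Python of computing a per-participant N400 difference score (incongruent minus congruent) and correlating it with audiovisual SIN gain. All variable names, windows, and values are simulated assumptions for illustration only, not the authors' data or pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated per-participant mean ERP amplitudes (microvolts) in an
# assumed N400 window (e.g., 300-500 ms); values are illustrative only.
rng = np.random.default_rng(0)
n_participants = 24
n400_congruent = rng.normal(-2.0, 1.0, n_participants)
n400_incongruent = n400_congruent + rng.normal(-1.5, 0.8, n_participants)

# N400 effect as a difference score: incongruent minus congruent trials.
n400_effect = n400_incongruent - n400_congruent

# Simulated audiovisual gain in SIN accuracy (audiovisual minus
# audio-only proportion correct), loosely tied to the N400 effect here
# only so the example correlation is non-trivial.
sin_gain = 0.15 - 0.03 * n400_effect + rng.normal(0.0, 0.05, n_participants)

# Pearson correlation between the N400 effect and SIN improvement.
r, p = pearsonr(n400_effect, sin_gain)
print(f"r = {r:.2f}, p = {p:.3f}")
```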
Pages: 14-24 (11 pages)