Matching heard and seen speech: An ERP study of audiovisual word recognition

Cited by: 12
Authors
Kaganovich, Natalya [1 ,2 ]
Schumaker, Jennifer [1 ]
Rowland, Courtney [1 ]
Affiliations
[1] Purdue Univ, Dept Speech Language & Hearing Sci, Lyles Porter Hall, 715 Clin Dr, W Lafayette, IN 47907 USA
[2] Purdue Univ, Dept Psychol Sci, 703 Third St, W Lafayette, IN 47907 USA
Funding
U.S. National Institutes of Health;
Keywords
AUDITORY-VISUAL INTEGRATION; LEARNING-DISABILITIES; BRAIN POTENTIALS; PERCEPTION; CHILDREN; MEMORY; RETRIEVAL; COMPONENT; ADULTS;
DOI
10.1016/j.bandl.2016.04.010
Chinese Library Classification (CLC) Codes
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject Classification Codes
100104; 100213;
Abstract
Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined the relationship between two distinct stages of visual articulatory processing and SIN accuracy by combining a cross-modal repetition priming task with ERP recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether the subsequently presented visual silent articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of mismatch detection at the pre-lexical level. Congruent articulations elicited a significantly larger LPC, indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials was significantly correlated with individuals' SIN accuracy improvement in the presence of the talker's face. (C) 2016 Elsevier Inc. All rights reserved.
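The key correlational result in the abstract (an N400 congruency effect related to each individual's audiovisual SIN benefit) can be illustrated with a minimal sketch. The code below is not the authors' analysis pipeline; the array names and placeholder values are hypothetical, and only the general technique, a per-participant ERP difference amplitude correlated with a behavioral gain score, follows the abstract.

```python
# Illustrative sketch only (not the published analysis): correlate each
# participant's N400 congruency effect with their audiovisual SIN benefit.
import numpy as np
from scipy.stats import pearsonr

# Placeholder per-participant values; in the study these would be measured
# ERP mean amplitudes and behavioral SIN accuracy scores.
rng = np.random.default_rng(0)
n400_incongruent = rng.normal(-3.0, 1.0, size=20)  # mean amplitude (uV), incongruent trials
n400_congruent = rng.normal(-1.5, 1.0, size=20)    # mean amplitude (uV), congruent trials
sin_benefit = rng.normal(15.0, 5.0, size=20)       # AV-minus-audio-only accuracy gain (%)

# N400 congruency effect: incongruent minus congruent difference per participant.
n400_effect = n400_incongruent - n400_congruent

# Pearson correlation between the ERP effect and the behavioral benefit.
r, p = pearsonr(n400_effect, sin_benefit)
print(f"r = {r:.2f}, p = {p:.3f}")
```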
Pages: 14-24
Page count: 11
Related Papers
50 items in total
  • [21] Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition
    Stevenson, Ryan A.
    Nelms, Caitlin E.
    Baum, Sarah H.
    Zurkovsky, Lilia
    Barense, Morgan D.
    Newhouse, Paul A.
    Wallace, Mark T.
    NEUROBIOLOGY OF AGING, 2015, 36 (01) : 283 - 291
  • [22] Metrical stress in speech recognition: An explorative ERP study
    Bocker, KBE
    de Gelder, B
    Vroomen, J
    Bastiaansen, MCM
    Brunia, CHM
    JOURNAL OF PSYCHOPHYSIOLOGY, 1997, 11 (01) : 90 - 90
  • [23] Integration of heard and seen speech: a factor in learning disabilities in children
    Hayes, EA
    Tiippana, K
    Nicol, TG
    Sams, M
    Kraus, N
    NEUROSCIENCE LETTERS, 2003, 351 (01) : 46 - 50
  • [24] Frequency and regularity effects on visual word recognition: an ERP study
    Simon, G
    Belmont, A
    Bruneteau, A
    Bernard, C
    Rebai, M
    INTERNATIONAL JOURNAL OF PSYCHOPHYSIOLOGY, 2001, 41 (03) : 229 - 229
  • [25] Orthographic facilitation in Chinese spoken word recognition: An ERP study
    Zou, Lijuan
    Desroches, Amy S.
    Liu, Youyi
    Xia, Zhichao
    Shu, Hua
    BRAIN AND LANGUAGE, 2012, 123 (03) : 164 - 173
  • [26] An Efficient and Noise-Robust Audiovisual Encoder for Audiovisual Speech Recognition
    Li, Zhengyang
    Liang, Chenwei
    Lohrenz, Timo
    Sach, Marvin
    Moeller, Bjoern
    Fingscheidt, Tim
    INTERSPEECH 2023, 2023, : 1583 - 1587
  • [27] Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model
    Loh, Marco
    Schmid, Gabriele
    Deco, Gustavo
    Ziegler, Wolfram
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2010, 22 (02) : 240 - 247
  • [28] Automatic Bimodal Audiovisual Speech Recognition: A Review
    Kandagal, Amaresh P.
    Udayashankara, V.
    2014 INTERNATIONAL CONFERENCE ON CONTEMPORARY COMPUTING AND INFORMATICS (IC3I), 2014, : 940 - 945
  • [29] Bimodal variational autoencoder for audiovisual speech recognition
    Sayed, Hadeer M.
    ElDeeb, Hesham E.
    Taie, Shereen A.
    MACHINE LEARNING, 2023, 112 : 1201 - 1226
  • [30] An ERP megastudy of Chinese word recognition
    Tsang, Yiu-Kei
    Zou, Yun
    PSYCHOPHYSIOLOGY, 2022, 59 (11)