Auditory perceptual learning for speech perception can be enhanced by audiovisual training

Cited by: 47
Authors
Bernstein, Lynne E. [1 ]
Auer, Edward T., Jr. [1 ]
Eberhardt, Silvio P. [1 ]
Jiang, Jintao [1 ]
Affiliations
[1] George Washington Univ, Dept Speech & Hearing Sci, Commun Neurosci Lab, Washington, DC 20052 USA
Keywords
audiovisual speech processing; audiovisual speech perception; perceptual learning; reverse hierarchy theory; auditory perception; visual speech perception; multisensory processing; plasticity and learning; SUPERIOR TEMPORAL SULCUS; VOICE FUNDAMENTAL-FREQUENCY; VISUAL INFORMATION; MISMATCH NEGATIVITY; WORD-RECOGNITION; NORMAL-HEARING; SEEING VOICES; INTEGRATION; MCGURK; NOISE;
DOI
10.3389/fnins.2013.00034
Chinese Library Classification
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
Pages: 16
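
The degradation method named in the abstract is vocoding. As a rough illustration only, the sketch below implements a generic noise-excited channel vocoder in Python (NumPy/SciPy); the channel count, filter design, and envelope cutoff are assumptions for illustration and are not taken from the study's actual stimulus pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=6, lo=100.0, hi=7000.0, env_cutoff=30.0):
    """Generic noise-excited channel vocoder (illustrative sketch only).

    Requires fs > 2 * hi. Parameters are hypothetical defaults, not the
    settings used in the study described above.
    """
    # Log-spaced band edges across the analysis range
    edges = np.geomspace(lo, hi, n_channels + 1)
    out = np.zeros_like(signal, dtype=float)
    for i in range(n_channels):
        sos = butter(4, [edges[i], edges[i + 1]], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        # Slowly varying amplitude envelope of this band
        env = np.abs(hilbert(band))
        sos_env = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)
        # Re-impose the envelope on band-limited noise (discards fine structure)
        carrier = sosfiltfilt(sos, np.random.randn(len(signal)))
        out += env * carrier
    # Match overall RMS level to the input
    out *= np.sqrt(np.mean(signal ** 2) / (np.mean(out ** 2) + 1e-12))
    return out

# Example (hypothetical): y = noise_vocode(x, fs=16000, n_channels=6)
```

The point of this kind of processing for the study is that fine spectral detail is replaced by noise while the coarse per-band amplitude envelopes are retained, yielding degraded but learnable speech.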