Discriminating Non-native Vowels on the Basis of Multimodal, Auditory or Visual Information: Effects on Infants' Looking Patterns and Discrimination

Cited by: 20
Authors
Ter Schure, Sophie [1 ]
Junge, Caroline [2 ]
Boersma, Paul [1 ]
Affiliations
[1] Univ Amsterdam, Linguist, Amsterdam, Netherlands
[2] Univ Utrecht, Expt Psychol, Utrecht, Netherlands
Source
FRONTIERS IN PSYCHOLOGY | 2016, Vol. 7
Keywords
audiovisual speech integration; distributional learning; multimodal perception; infants; non-native phonemes; gaze locations; intersensory redundancy hypothesis; language acquisition; DEVELOPMENTAL-CHANGES; SELECTIVE ATTENTION; PHONETIC PERCEPTION; SPEECH-PERCEPTION; TALKING FACE; LANGUAGE; ADULTS; BILINGUALISM; FACILITATION; EXPERIENCE;
DOI
10.3389/fpsyg.2016.00525
Chinese Library Classification
B84 [Psychology]
Subject Classification Code
04; 0402
Abstract
Infants' perception of speech sound contrasts is modulated by their language environment, for example by the statistical distributions of the speech sounds they hear. Infants learn to discriminate speech sounds better when their input contains a two-peaked frequency distribution of those speech sounds than when their input contains a one-peaked frequency distribution. Effects of frequency distributions on phonetic learning have been tested almost exclusively for auditory input. But auditory speech is usually accompanied by visual information, that is, by visible articulations. This study tested whether infants' phonological perception is shaped by distributions of visual speech as well as by distributions of auditory speech, by comparing learning from multimodal (i.e., auditory-visual), visual-only, or auditory-only information. Dutch 8-month-old infants were exposed to either a one-peaked or a two-peaked distribution from a continuum of vowels that formed a contrast in English, but not in Dutch. We used eye tracking to measure effects of distribution and sensory modality on infants' discrimination of the contrast. Although there were no overall effects of distribution or modality, separate t-tests in each of the six training conditions demonstrated significant discrimination of the vowel contrast in the two-peaked multimodal condition. For the modalities in which the mouth was visible (visual-only and multimodal), we further examined infants' looking patterns across the dynamic speaker's face. Infants in the two-peaked multimodal condition looked longer at the speaker's mouth than infants in any of the three other conditions. We propose that by 8 months, infants' native vowel categories are established to the extent that learning a novel contrast is supported by attention to additional information, such as visual articulations.
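To make the distributional manipulation described in the abstract concrete, the following minimal Python sketch shows how a one-peaked versus a two-peaked exposure set over a vowel continuum could be constructed. The 8-step continuum, the specific per-step weights, and the total of 32 tokens are illustrative assumptions, not the stimulus parameters reported in the study.

```python
# Illustrative sketch only (not the authors' stimulus script): per-step token
# frequencies for a one-peaked vs. a two-peaked exposure distribution over a
# hypothetical 8-step vowel continuum, the kind of manipulation used in
# distributional learning studies.

STEPS = list(range(1, 9))  # hypothetical steps from one vowel endpoint to the other

# One peak in the middle of the continuum: input consistent with one vowel category.
ONE_PEAKED = [1, 2, 5, 8, 8, 5, 2, 1]   # sums to 32 tokens

# Two peaks near the endpoints: input consistent with two vowel categories.
TWO_PEAKED = [5, 8, 2, 1, 1, 2, 8, 5]   # also sums to 32 tokens -> equal total exposure

def exposure_tokens(weights, steps=STEPS):
    """Expand per-step token counts into a flat presentation list of continuum steps."""
    return [step for step, count in zip(steps, weights) for _ in range(count)]

if __name__ == "__main__":
    for label, weights in (("one-peaked", ONE_PEAKED), ("two-peaked", TWO_PEAKED)):
        tokens = exposure_tokens(weights)
        print(f"{label}: {len(tokens)} tokens, counts per step = {weights}")
```

Under a distributional learning account, infants exposed to the two-peaked shape (in whichever modality) receive evidence for two categories and are therefore expected to discriminate the endpoint vowels better than infants exposed to the one-peaked shape.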
Pages: 11
Related Papers
3 records
  • [1] Plumridge, James M. A.; Barham, Michael P.; Foley, Denise L.; Ware, Anna T.; Clark, Gillian M.; Albein-Urios, Natalia; Hayden, Melissa J.; Lum, Jarrad A. G. The Effect of Visual Articulatory Information on the Neural Correlates of Non-native Speech Sound Discrimination. FRONTIERS IN HUMAN NEUROSCIENCE, 2020, 14.
  • [2] Deng, Xizi; Mcclay, Elise; Jastrzebski, Erin; Wang, Yue; Yeung, H. Henny. Visual scanning patterns of a talking face when evaluating phonetic information in a native and non-native language. PLOS ONE, 2024, 19(5).
  • [3] Kasisopa, Benjawan; Antonios, Lamya El-Khoury; Jongman, Allard; Sereno, Joan A.; Burnham, Denis. Training Children to Perceive Non-native Lexical Tones: Tone Language Background, Bilingualism, and Auditory-Visual Information. FRONTIERS IN PSYCHOLOGY, 2018, 9.