Linguistic experience and audio-visual perception of non-native fricatives

Cited by: 37
Authors
Wang, Yue [1 ]
Behne, Dawn M. [2 ]
Jiang, Haisheng [3 ]
Affiliations
[1] Simon Fraser Univ, Dept Linguist, Burnaby, BC V5A 1S6, Canada
[2] Norwegian Univ Sci & Technol, Dept Psychol, N-7491 Trondheim, Norway
[3] Simon Fraser Univ, Dept Linguist, Burnaby, BC V5A 1S6, Canada
Source
Keywords
DOI
10.1121/1.2956483
CLC number
O42 [Acoustics];
Subject classification codes
070206 ; 082403 ;
Abstract
This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain. (C) 2008 Acoustical Society of America.
Pages: 1716-1726
Page count: 11
Related papers
50 total
  • [41] On the production and the perception of audio-visual speech by man and machine
    Benoit, C
    MULTIMEDIA COMMUNICATIONS AND VIDEO CODING, 1996, : 277 - 284
  • [42] Audio-visual saliency prediction with multisensory perception and integration
    Xie, Jiawei
    Liu, Zhi
    Li, Gongyang
    Song, Yingjie
    IMAGE AND VISION COMPUTING, 2024, 143
  • [43] Audio-visual speech perception without speech cues
    Saldana, HM
    Pisoni, DB
    Fellowes, JM
    Remez, RE
    ICSLP 96 - FOURTH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, PROCEEDINGS, VOLS 1-4, 1996, : 2187 - 2190
  • [44] Effects of stimulus duration on audio-visual synchrony perception
    I. A. Kuling
    R. L. J. van Eijk
    J. F. Juola
    A. Kohlrausch
    Experimental Brain Research, 2012, 221 : 403 - 412
  • [45] Audio-Visual Perception System for a Humanoid Robotic Head
    Viciana-Abad, Raquel
    Marfil, Rebeca
    Perez-Lorenzo, Jose M.
    Bandera, Juan P.
    Romero-Garces, Adrian
    Reche-Lopez, Pedro
    SENSORS, 2014, 14 (06) : 9522 - 9545
  • [46] Audio-Visual Prosody: Perception, Detection, and Synthesis of Prominence
    Al Moubayed, Samer
    Beskow, Jonas
    Granstrom, Bjorn
    House, David
    TOWARD AUTONOMOUS, ADAPTIVE, AND CONTEXT-AWARE MULTIMODAL INTERFACES: THEORETICAL AND PRACTICAL ISSUES, 2011, 6456 : 55 - 71
  • [48] Audio-Visual Predictive Processing in the Perception of Humans and Robots
    Sarigul, Busra
    Urgen, Burcu A.
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2023, 15 (05) : 855 - 865
  • [49] Audio-visual Human Tracking for Active Robot Perception
    Bayram, Baris
    Ince, Gokhan
    2015 23RD SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2015, : 1264 - 1267
  • [50] Audio-visual temporal perception in children with restored hearing
    Gori, Monica
    Chilosi, Anna
    Forli, Francesca
    Burr, David
    NEUROPSYCHOLOGIA, 2017, 99 : 350 - 359