A Unified Framework to Improve Learners' Skills of Perception and Production Based on Speech Shadowing and Overlapping

Cited by: 0
Authors
Minematsu, Nobuaki [1 ]
Nakanishi, Noriko [2 ]
Gao, Yingxiang [1 ]
Sun, Haitong [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Engn, Tokyo, Japan
[2] Kobe Gakuin Univ, Fac Global Commun, Kobe, Hyogo, Japan
Source
Keywords
language learning; perception and production; speech shadowing; utterance comparison; speech game;
DOI
Not available
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
A unified framework for improving learners' skills in perceiving and producing L2 sounds is demonstrated, based on speech shadowing and overlapping. Speech shadowing is a training method in which learners are asked to reproduce a given model speech (M) with as little delay as possible, and it has been shown to be effective in enhancing L2 speech perception. After several trials of shadowing, learners are given M's script and continue shadowing with no delay, a task called overlapping. By comparing the shadowing speech (S) with the script-shadowing speech (SS), shadowing breakdowns are measured sequentially, which can characterize listening breakdowns. By comparing M and SS, prosodic and segmental gaps are analyzed sequentially and presented visually to learners, along with imitation scores. All the tasks are implemented as interactive speech games that help learners become more proficient in L2 speech perception and production.
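The sequential utterance comparison described in the abstract can be illustrated with a dynamic time warping (DTW) alignment between a model utterance (M) and a learner recording (S or SS). The sketch below is purely illustrative and is not the authors' actual algorithm: the per-frame features are stand-ins for real acoustic features (e.g. pitch or MFCCs), and the `imitation_score` scaling is a hypothetical mapping of alignment cost to a 0-1 score.

```python
import math

def dtw_distance(model, learner):
    """DTW alignment cost between two per-frame feature sequences
    (here, one float per frame for simplicity). Lower cost means the
    learner's utterance tracks the model more closely over time."""
    n, k = len(model), len(learner)
    inf = float("inf")
    # cost[i][j] = minimal cumulative cost aligning model[:i] with learner[:j]
    cost = [[inf] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            d = abs(model[i - 1] - learner[j - 1])  # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][k]

def imitation_score(model, learner):
    """Map the alignment cost to a score in (0, 1], with 1.0 meaning a
    perfect imitation. The normalization and exponential are arbitrary
    choices for illustration only."""
    return math.exp(-dtw_distance(model, learner) / (len(model) + len(learner)))
```

For example, an identical learner sequence yields a score of 1.0, while any deviation from the model lowers the score; a real system would run this frame-by-frame comparison on acoustic features extracted from M, S, and SS.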
Pages: 3667-3668 (2 pages)