Study of articulators’ contribution and compensation during speech by articulatory speech recognition

Cited by: 0

Authors
Jianguo Wei
Yan Ji
Jingshu Zhang
Qiang Fang
Wenhuan Lu
Kiyoshi Honda
Xugang Lu
Institutions
[1] Tianjin University, School of Computer Software
[2] Tianjin University, School of Computer Science and Technology
[3] Chinese Academy of Social Sciences
[4] NICT
Keywords
DNN; Articulatory recognition; Articulators' contribution; Crucial level; Compensation
Abstract
In this paper, the contributions of dynamic articulatory information are evaluated using an articulatory speech recognition system. Electromagnetic articulographic (EMA) datasets are relatively small and harder to record than the popular speech corpora used in modern speech research. We used articulatory data to study the contribution of each observation channel of the vocal tract to speech recognition within a DNN framework. We also analyzed the recognition results for each phoneme according to speech production rules. The contribution rate of each articulator can be interpreted as its crucial level for each phoneme in speech production. Furthermore, the results indicate that the contribution of each observation point is not tied to a specific method, and the contribution tendency of each sensor is consistent with the rules of Japanese phonology. We also evaluated the compensation effect between different channels and found that crucial points are harder to compensate for than non-crucial points. The proposed method can help identify the crucial points of each phoneme during speech, and the results can contribute to the study of speech production and to articulatory-based speech recognition.
Pages: 18849–18864 (15 pages)
Related papers (50 total)
  • [1] Wei, Jianguo; Ji, Yan; Zhang, Jingshu; Fang, Qiang; Lu, Wenhuan; Honda, Kiyoshi; Lu, Xugang. Study of articulators' contribution and compensation during speech by articulatory speech recognition. Multimedia Tools and Applications, 2018, 77(14): 18849–18864.
  • [2] Srinivasan, Gokul; Illa, Aravind; Ghosh, Prasanta Kumar. A study on robustness of articulatory features for automatic speech recognition of neutral and whispered speech. 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019: 5936–5940.
  • [3] Matura, Martin; Juzova, Marketa; Matousek, Jindrich. On the contribution of articulatory features to speech synthesis. Speech and Computer (SPECOM 2018), 2018, 11096: 398–407.
  • [4] Mitra, Vikramjit; Nam, Hosung; Espy-Wilson, Carol; Saltzman, Elliot; Goldstein, Louis. Recognizing articulatory gestures from speech for robust speech recognition. Journal of the Acoustical Society of America, 2012, 131(3): 2270–2287.
  • [5] Metze, Florian. Articulatory features for "meeting" speech recognition. Interspeech 2006 and 9th International Conference on Spoken Language Processing, Vols 1–5, 2006: 581–584.
  • [6] Rudzicz, Frank. Articulatory knowledge in the recognition of dysarthric speech. IEEE Transactions on Audio, Speech, and Language Processing, 2011, 19(4): 947–960.
  • [7] Rudzicz, Frank. Using articulatory likelihoods in the recognition of dysarthric speech. Speech Communication, 2012, 54(3): 430–444.
  • [8] Uraga, Esmeralda; Hain, Thomas. Automatic speech recognition experiments with articulatory data. Interspeech 2006 and 9th International Conference on Spoken Language Processing, Vols 1–5, 2006: 353–356.
  • [9] Mitra, Vikramjit; Nam, Hosung; Espy-Wilson, Carol Y.; Saltzman, Elliot; Goldstein, Louis. Articulatory information for noise robust speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 2011, 19(7): 1913–1924.
  • [10] Najnin, Shamima; Banerjee, Bonny. Speech recognition using cepstral articulatory features. Speech Communication, 2019, 107: 26–37.