Automatic speech recognition and training for severely dysarthric users of assistive technology: The STARDUST project

Cited by: 32
Authors
Parker, M
Cunningham, S
Enderby, P
Hawley, M
Green, P
Affiliations
[1] Univ Sheffield, Dept Human Commun Sci, Sheffield S10 2TN, S Yorkshire, England
[2] Sheffield Speech & Language Therapy Agcy, Sheffield, S Yorkshire, England
[3] Univ Sheffield, Inst Gen Practice, Sheffield S10 2TN, S Yorkshire, England
[4] Barnsley Dist Gen Hosp NHS Trust, Dept Med Phys & Clin Engn, Barnsley, England
[5] Univ Sheffield, Dept Comp Sci, Sheffield S10 2TN, S Yorkshire, England
Keywords
dysarthria; automatic speech recognition; articulation; assistive technology; speech training; treatment
DOI
10.1080/02699200400026884
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject classification codes
100104; 100213
Abstract
The STARDUST project developed robust computer speech recognizers to allow eight people with severe dysarthria and concomitant physical disability to access assistive technologies. Speaker-independent computer speech recognizers trained on normal speech are of limited functional use to those with severe dysarthria, whose output approximates "normal" articulatory patterns only loosely and inconsistently. Severe dysarthric output may also be characterized by a small set of distinguishable phonetic tokens, making the acoustic differentiation of target words difficult. Speaker-dependent computer speech recognition using Hidden Markov Models was achieved by identifying robust phonetic elements within each speaker's output patterns. A new system of speech training using computer-generated visual and auditory feedback reduced the inconsistent production of key phonetic tokens over time.
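The abstract describes whole-word, speaker-dependent recognition with Hidden Markov Models trained on an individual speaker's own productions. Below is a minimal sketch of that general technique, not the STARDUST system itself: it assumes an MFCC front end via librosa, one GaussianHMM per vocabulary word via hmmlearn, and a hypothetical wav_files mapping from each word to that speaker's training recordings.

```python
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load one recording and return a (frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_word_models(wav_files, n_states=5):
    """Fit one HMM per vocabulary word from a single speaker's recordings.

    wav_files is a hypothetical mapping: {"word": ["take1.wav", ...], ...}.
    """
    models = {}
    for word, paths in wav_files.items():
        feats = [mfcc_features(p) for p in paths]
        X = np.vstack(feats)                   # stack all training examples
        lengths = [f.shape[0] for f in feats]  # frame count per example
        model = GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=25)
        model.fit(X, lengths)
        models[word] = model
    return models

def recognize(models, path):
    """Return the vocabulary word whose HMM best explains the utterance."""
    feats = mfcc_features(path)
    return max(models, key=lambda word: models[word].score(feats))
```

A call such as recognize(models, "utterance.wav") selects the maximum-likelihood word. In the spirit of the paper, a deployed vocabulary would be restricted to the tokens the individual client produces most consistently, which is what the feedback-based training component aims to encourage.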
Pages: 149-156
Number of pages: 8