An optimized machine translation technique for multi-lingual speech to sign language notation

Cited by: 0
Authors
Amandeep Singh Dhanjal
Williamjeet Singh
Affiliations
[1] Punjabi University,Department of Computer Science
[2] Punjabi University,Department of Computer Science and Engineering
Keywords
Speech to sign language; Direct translation; Machine learning; HamNoSys
DOI: not available
Abstract
Due to the lack of assistive resources, hard-of-hearing people cannot live independently. Sign language, or gesture language, is the natural language and primary mode of communication of hard-of-hearing people. Researchers and IT companies are continuously seeking solutions that minimize communication barriers for hearing-impaired people. Existing web-based techniques for translating speech to sign language consume substantial resources. This study presents an optimized technique for direct machine translation of multi-lingual speech to Indian Sign Language using the HamNoSys notation system, whereas existing techniques translate speech to text and then text to HamNoSys. The performance of the existing and proposed techniques is compared, and the proposed technique optimizes the following resources: CPU time, heap memory, primary memory, and loaded classes. The results show that the existing technique uses 220 MB of heap memory, 10 threads, 2236 loaded classes, and 12 s of CPU time, while the proposed technique consumes only 210.4 MB of heap memory, 9 threads, 2113 loaded classes, and 9 s of CPU time.
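The architectural difference the abstract describes, direct translation versus the existing speech-to-text-to-HamNoSys pivot, can be sketched as follows. This is a minimal illustration only: the function names, the phoneme-style key, and the toy lookup tables are hypothetical placeholders, not the authors' actual data structures or HamNoSys symbols.

```python
# Illustrative sketch: pivot vs. direct translation to HamNoSys notation.
# The lookup tables and the phoneme-style key are placeholders.

# Existing approach: speech -> text -> HamNoSys (two stages, an extra
# intermediate text representation to hold in memory).
SPEECH_TO_TEXT = {"nəməste": "namaste"}          # speech-recognition stage
TEXT_TO_HAMNOSYS = {"namaste": "\ue001\ue0e9"}   # text-to-notation stage

def pivot_translate(phonemes: str) -> str:
    text = SPEECH_TO_TEXT[phonemes]              # stage 1: recognize text
    return TEXT_TO_HAMNOSYS[text]                # stage 2: map text to notation

# Proposed approach: speech -> HamNoSys directly (one stage, one lookup),
# skipping the intermediate text representation.
SPEECH_TO_HAMNOSYS = {"nəməste": "\ue001\ue0e9"}

def direct_translate(phonemes: str) -> str:
    return SPEECH_TO_HAMNOSYS[phonemes]

# Both routes yield the same notation; the direct route does less work.
assert pivot_translate("nəməste") == direct_translate("nəməste")
```

Collapsing the two stages into one is what the reported resource savings (fewer loaded classes, threads, and CPU seconds) would plausibly follow from, since the intermediate text stage and its supporting code never run.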
Pages: 24099-24117 (18 pages)