Deep Learning Speech Synthesis Model for Word/Character-Level Recognition in the Tamil Language

Citations: 0
|
Authors
Rajendran, Sukumar [1 ]
Raja, Kiruba Thangam [2 ]
Nagarajan, G. [3 ]
Dass, A. Stephen [2 ]
Kumar, M. Sandeep [2 ]
Jayagopal, Prabhu [2 ]
Affiliations
[1] VIT Bhopal Univ, Sch Comp Sci & Engn, Indore Highway Kothrikalan, Bhopal, India
[2] Vellore Inst Technol, Sch Informat Technol & Engn, Vellore, India
[3] Panimalar Engn Coll, Dept Math, Chennai, India
Keywords
Deep Learning; Language; Modeling; Tamil Speech; Visualization;
DOI
10.4018/IJeC.316824
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
With the widespread use of electronic devices and the growing popularity of social media, text data is being created at unprecedented rates. Humans cannot read all of this data to discover what is being discussed within their sphere of interest. Topic modeling is a technique for identifying the subjects present in a large collection of texts. While topic modeling has been studied extensively for English, little comparable progress has been made for resource-scarce languages such as Tamil, even though it is spoken by millions of people worldwide. Because the outputs of deep learning models are often difficult for typical users to interpret, the authors employ various visualization techniques to represent the results of deep learning in a meaningful way. They then evaluate the deep learning models using metrics such as similarity, correlation, perplexity, and coherence.
Pages: 20-20
Number of pages: 1
Related Papers
50 items total
  • [41] Deep Learning Model for Tamil Part-of-Speech Tagging
    Visuwalingam, Hemakasiny
    Sakuntharaj, Ratnasingam
    Alawatugoda, Janaka
    Ragel, Roshan
    COMPUTER JOURNAL, 2024, 67 (08): 2633-2642
  • [42] Character-Level Dependency Model for Joint Word Segmentation, POS Tagging, and Dependency Parsing in Chinese
    Guo, Zhen
    Zhang, Yujie
    Su, Chen
    Xu, Jinan
    Isahara, Hitoshi
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2016, E99D (01): 257-264
  • [43] CHARACTER-LEVEL LANGUAGE MODELING WITH HIERARCHICAL RECURRENT NEURAL NETWORKS
    Hwang, Kyuyeon
    Sung, Wonyong
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017: 5720-5724
  • [44] A word language model based contextual language processing on Chinese character recognition
    Huang, Chen
    Ding, Xiaoqing
    Chen, Yan
    DOCUMENT RECOGNITION AND RETRIEVAL XVII, 2010, 7534
  • [45] Character-Level Language Modeling with Deeper Self-Attention
    Al-Rfou, Rami
    Choe, Dokook
    Constant, Noah
    Guo, Mandy
    Jones, Llion
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019: 3159-3166
  • [46] BiLSTM-CRF Manipuri NER with Character-Level Word Representation
    Jimmy, Laishram
    Nongmeikappam, Kishorjit
    Naskar, Sudip Kumar
    ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2023, 48 (02): 1715-1734
  • [47] Parameter-Efficient Korean Character-Level Language Modeling
    Cognetta, Marco
    Wolf-Sonkin, Lawrence
    Moon, Sangwhan
    Okazaki, Naoaki
    EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference, 2023: 2342-2348
  • [49] A Study on Dialog Act Recognition Using Character-Level Tokenization
    Ribeiro, Eugenio
    Ribeiro, Ricardo
    de Matos, David Martins
    ARTIFICIAL INTELLIGENCE: METHODOLOGY, SYSTEMS, AND APPLICATIONS, AIMSA 2018, 2018, 11089: 93-103
  • [50] Parameter-Efficient Korean Character-Level Language Modeling
    Cognetta, Marco
    Moon, Sangwhan
    Wolf-Sonkin, Lawrence
    Okazaki, Naoaki
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023: 2350-2356