End-to-End Speech Synthesis for Tibetan Multidialect

Cited: 2
Authors
Xu, Xiaona [1 ]
Yang, Li [1 ]
Zhao, Yue [1 ]
Wang, Hui [1 ]
Affiliations
[1] Minzu Univ China, Sch Informat Engn, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
All Open Access; Gold;
DOI
10.1155/2021/6682871
Chinese Library Classification
O1 [Mathematics];
Subject Classification Codes
0701; 070101;
Abstract
Research on Tibetan speech synthesis has mainly focused on single dialects, leaving Tibetan multidialect speech synthesis largely unexplored. This paper presents an end-to-end Tibetan multidialect speech synthesis model that can synthesize speech in different Tibetan dialects within a single system. First, the Wylie transliteration scheme is used to convert Tibetan text into the corresponding Latin letters, which effectively reduces the size of the training corpus and the workload of front-end text processing. Second, a shared feature prediction network with a recurrent sequence-to-sequence structure maps the Latin transliteration vectors of Tibetan characters to Mel spectrograms and learns features shared across the multidialect speech data. Third, two dialect-specific WaveNet vocoders are combined with the feature prediction network to convert the Mel spectrograms into time-domain waveforms for the Lhasa-u-Tsang and Amdo pastoral dialects, respectively. The model avoids time-consuming tasks that require extensive Tibetan dialect expertise, such as phonetic analysis and phonological annotation, and can synthesize Lhasa-u-Tsang and Amdo pastoral speech directly from the existing text annotation. Experimental results show that speech synthesized by the proposed method has better clarity and naturalness than that of a Tibetan monolingual model.
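The front-end step described in the abstract, converting Tibetan script to Wylie Latin letters, can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: it covers only single-consonant roots, one optional vowel sign, and bare suffix consonants, whereas full Wylie also handles consonant stacks, prefixes, and the complete letter inventory.

```python
# Simplified Wylie transliteration sketch (illustrative only, small letter
# subset). Real Wylie also covers stacked/subjoined consonants, prefixes,
# and all 30 base letters.
CONSONANTS = {"ཀ": "k", "ཁ": "kh", "ག": "g", "ང": "ng",
              "ད": "d", "ན": "n", "བ": "b", "མ": "m", "ས": "s"}
VOWELS = {"ི": "i", "ུ": "u", "ེ": "e", "ོ": "o"}
TSHEG = "་"  # Tibetan intersyllabic delimiter (U+0F0B)

def syllable_to_wylie(syl):
    out, vowel_seen = [], False
    for ch in syl:
        if ch in CONSONANTS:
            if out and not vowel_seen:
                out.append("a")      # inherent vowel of the root letter
                vowel_seen = True
            out.append(CONSONANTS[ch])
        elif ch in VOWELS:
            out.append(VOWELS[ch])   # explicit vowel sign replaces 'a'
            vowel_seen = True
    if not vowel_seen:
        out.append("a")              # bare consonant carries inherent 'a'
    return "".join(out)

def to_wylie(text):
    # Syllables are separated by the tsheg mark; join results with spaces.
    return " ".join(syllable_to_wylie(s) for s in text.split(TSHEG) if s)
```

For example, `to_wylie("བོད")` yields `"bod"`: the root བ contributes `b`, the vowel sign ོ contributes `o`, and the suffix ད contributes a bare `d`.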
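The overall pipeline, a shared feature prediction network feeding one WaveNet vocoder per dialect, can be sketched structurally as below. All class and function names are hypothetical stand-ins for illustration, not the authors' code; toy functions replace the trained seq2seq model and vocoders.

```python
# Structural sketch of the multidialect pipeline from the abstract: one
# shared text-to-Mel feature predictor, plus a dialect-specific vocoder
# that turns the Mel spectrogram into a time-domain waveform.
class MultidialectTTS:
    def __init__(self, feature_predictor, vocoders):
        self.feature_predictor = feature_predictor  # shared across dialects
        self.vocoders = vocoders                    # one vocoder per dialect

    def synthesize(self, wylie_text, dialect):
        mel = self.feature_predictor(wylie_text)    # text -> Mel spectrogram
        return self.vocoders[dialect](mel)          # Mel -> waveform

# Toy stand-ins so the control flow can be exercised without trained models.
predict_mel = lambda text: [float(len(text))]       # fake "spectrogram"
tts = MultidialectTTS(predict_mel, {
    "lhasa": lambda mel: ("lhasa_wave", mel),
    "amdo":  lambda mel: ("amdo_wave", mel),
})
```

The design point this sketch captures is that only the vocoder is duplicated per dialect; the expensive text-to-spectrogram model is trained once on pooled multidialect data.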
Pages: 8