Enhancing Word-Level Semantic Representation via Dependency Structure for Expressive Text-to-Speech Synthesis

Cited by: 2
Authors
Zhou, Yixuan [1,4]
Song, Changhe [1]
Li, Jingbei [1]
Wu, Zhiyong [1,2]
Bian, Yanyao [3]
Su, Dan [3]
Meng, Helen [2]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[3] Tencent, Tencent AI Lab, Shenzhen, Peoples R China
[4] Tencent, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
expressive speech synthesis; semantic representation enhancing; dependency parsing; graph neural network
DOI
10.21437/Interspeech.2022-10061
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Exploiting the rich linguistic information in raw text is crucial for expressive text-to-speech (TTS). With the development of large-scale pre-trained text representations, bidirectional encoder representations from Transformers (BERT) have been shown to embody semantic information and have recently been applied to TTS. However, original or simply fine-tuned BERT embeddings still cannot provide the full semantic knowledge that expressive TTS models should take into account. In this paper, we propose a word-level semantic representation enhancing method based on dependency structure and pre-trained BERT embeddings. The BERT embedding of each word is reprocessed according to its specific dependency relations and related words in the sentence, yielding a more effective semantic representation for TTS. To better exploit the dependency structure, a relational gated graph network (RGGN) is introduced so that semantic information flows and aggregates along the dependency edges. Experimental results show that the proposed method further improves the naturalness and expressiveness of synthesized speech on both Mandarin and English datasets.
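
The record contains no code, but the abstract sketches an architecture: per-word BERT embeddings updated by relation-aware gated message passing over dependency arcs. The following is a minimal PyTorch sketch of that idea under stated assumptions; the layer name, the two-relation inventory, and the toy dependency arcs are illustrative and are not the paper's released implementation.

    # Illustrative sketch (not the authors' code): a relation-aware gated
    # graph layer that propagates word-level embeddings along dependency arcs.
    import torch
    import torch.nn as nn

    class RelationalGatedGraphLayer(nn.Module):
        def __init__(self, dim: int, num_relations: int):
            super().__init__()
            # One linear transform per dependency relation type (assumption:
            # each relation, e.g. nsubj or dobj, gets its own weight matrix).
            self.rel_transforms = nn.ModuleList(
                [nn.Linear(dim, dim, bias=False) for _ in range(num_relations)]
            )
            # A GRU cell gates the aggregated messages into each word's state.
            self.gru = nn.GRUCell(dim, dim)

        def forward(self, h, edges):
            # h:     (num_words, dim) word embeddings (e.g., from BERT)
            # edges: list of (src, dst, relation_id) dependency arcs
            messages = torch.zeros_like(h)
            for src, dst, rel in edges:
                # A message flows from the source word to the target word
                # through its relation-specific transform, summed at the target.
                messages[dst] = messages[dst] + self.rel_transforms[rel](h[src])
            return self.gru(messages, h)

    # Toy example: 4 words, 2 hypothetical relation types.
    torch.manual_seed(0)
    layer = RelationalGatedGraphLayer(dim=768, num_relations=2)
    word_embeddings = torch.randn(4, 768)          # stand-in for BERT outputs
    dependency_arcs = [(1, 0, 0), (1, 2, 1), (2, 3, 1)]
    enhanced = layer(word_embeddings, dependency_arcs)
    print(enhanced.shape)  # torch.Size([4, 768])

Stacking such layers lets information from syntactically related but linearly distant words reach each word's representation before it is fed to the TTS acoustic model.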
Pages: 5518-5522
Page count: 5