PromptStyle: Controllable Style Transfer for Text-to-Speech with Natural Language Descriptions

Cited by: 4
Authors
Liu, Guanghou [1 ]
Zhang, Yongmao [1 ]
Lei, Yi [1 ]
Chen, Yunlin [2 ]
Wang, Rui [2 ]
Li, Zhifei [2 ]
Xie, Lei [1 ]
Affiliations
[1] Northwestern Polytechnical University, Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Xi'an, China
[2] Shanghai Mobvoi Information Technology Co., Ltd., Shanghai, China
Source
Keywords
text-to-speech; style transfer; style prompt
DOI
10.21437/Interspeech.2023-1779
CLC number
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Style transfer TTS has shown impressive performance in recent years. However, style control is often restricted to systems built on expressive speech recordings with discrete style categories. In practical situations, users may want to transfer style by typing a text description of the desired style, without reference speech in the target style. Text-guided content generation techniques have drawn wide attention recently. In this work, we explore the possibility of controllable style transfer with natural language descriptions. To this end, we propose PromptStyle, a text-prompt-guided cross-speaker style transfer system. Specifically, PromptStyle consists of an improved VITS and a cross-modal style encoder. The cross-modal style encoder constructs a shared space of stylistic and semantic representations through a two-stage training process. Experiments show that PromptStyle achieves proper style transfer with text prompts while maintaining relatively high stability and speaker similarity. Audio samples are available on our demo page(1).
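The abstract describes the architecture only at a high level: an improved VITS backbone plus a cross-modal style encoder that learns a shared space for stylistic (reference speech) and semantic (text prompt) representations via two-stage training. The PyTorch sketch below loosely illustrates that shared-space idea under stated assumptions; the module names, dimensions, GRU encoders, and contrastive objective are placeholders chosen for brevity, not the paper's actual design or training recipe.

# A minimal, self-contained sketch of a cross-modal style encoder: map a
# natural-language style prompt and a reference mel-spectrogram into one
# shared style space so that either modality can drive style transfer.
# All names, sizes, and the InfoNCE objective are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptEncoder(nn.Module):
    """Encodes a tokenized style description (e.g., "a cheerful, fast voice")."""

    def __init__(self, vocab_size=10000, hidden=256, style_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, style_dim)

    def forward(self, token_ids):                      # (B, T_text)
        x = self.embed(token_ids)
        _, h = self.gru(x)                             # h: (1, B, hidden)
        return F.normalize(self.proj(h.squeeze(0)), dim=-1)


class ReferenceEncoder(nn.Module):
    """Encodes a reference mel-spectrogram into the same style space."""

    def __init__(self, n_mels=80, hidden=256, style_dim=128):
        super().__init__()
        self.gru = nn.GRU(n_mels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, style_dim)

    def forward(self, mels):                           # (B, T_frames, n_mels)
        _, h = self.gru(mels)
        return F.normalize(self.proj(h.squeeze(0)), dim=-1)


def contrastive_loss(text_emb, speech_emb, temperature=0.07):
    """Symmetric InfoNCE loss pulling paired prompt/speech embeddings together."""
    logits = text_emb @ speech_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(logits.size(0))             # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    prompt_enc, ref_enc = PromptEncoder(), ReferenceEncoder()
    tokens = torch.randint(0, 10000, (4, 12))          # 4 dummy style prompts
    mels = torch.randn(4, 200, 80)                     # 4 dummy reference mels
    loss = contrastive_loss(prompt_enc(tokens), ref_enc(mels))
    print(f"contrastive loss: {loss.item():.3f}")

At inference time, the reference-speech branch would be dropped and the normalized prompt embedding alone would condition the TTS backbone, which is what allows style transfer from a typed description without reference audio in the target style.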
Pages: 4888-4892
Number of pages: 5