MM-TTS: Multi-Modal Prompt Based Style Transfer for Expressive Text-to-Speech Synthesis

Cited by: 0
Authors
Guan, Wenhao [1 ]
Li, Yishuang [2 ]
Li, Tao [1 ]
Huang, Hukai [1 ]
Wang, Feng [1 ]
Lin, Jiayan [1 ]
Huang, Lingyan [1 ]
Li, Lin [2 ,3 ]
Hong, Qingyang [1 ]
Affiliations
[1] Xiamen Univ, Sch Informat, Xiamen, Peoples R China
[2] Xiamen Univ, Inst Artificial Intelligence, Xiamen, Peoples R China
[3] Xiamen Univ, Sch Elect Sci & Engn, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The style transfer task in Text-to-Speech (TTS) refers to transferring style information onto given text content so as to generate speech in a specific style. However, most existing style transfer approaches rely on either fixed emotion labels or reference speech clips, and therefore cannot achieve flexible style transfer. Recently, some methods have adopted text descriptions to guide style transfer. In this paper, we propose a more flexible, multi-modal, style-controllable TTS framework named MM-TTS. Within a single system, it can take a prompt in any modality from a unified multi-modal prompt space, including reference speech, emotional facial images, and text descriptions, to control the style of the generated speech. The challenges of modeling such a multi-modal style-controllable TTS lie mainly in two aspects: 1) aligning the multi-modal information into a unified style space so that a prompt of arbitrary modality can be used in a single system, and 2) efficiently transferring the unified style representation onto the given text content so that the generated voice reflects the prompt style. To address these problems, we propose an aligned multi-modal prompt encoder that embeds different modalities into a unified style space, supporting style transfer from any of these modalities. In addition, we present a new adaptive style transfer method, Style Adaptive Convolutions (SAConv), to achieve a better style representation. Furthermore, we design a Rectified Flow based Refiner to address over-smoothed Mel-spectrograms and generate audio of higher fidelity. Since there is no public dataset for multi-modal TTS, we construct a dataset named MEAD-TTS, derived from the expressive talking head domain. Our experiments on the MEAD-TTS dataset and on out-of-domain datasets demonstrate that MM-TTS achieves satisfactory results with multi-modal prompts. The audio samples and constructed dataset are available at https://multimodal-tts.github.io.
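The abstract names a Style Adaptive Convolution (SAConv) module for injecting the unified style representation into the content features. As a rough illustration only, the sketch below shows one common way such a layer could be built, assuming a FiLM-style scheme in which the style embedding predicts per-channel scale and shift applied after a 1-D convolution; the paper's actual SAConv design is not specified in this record, and all names (StyleAdaptiveConv1d, style_dim) are hypothetical.

# Minimal, illustrative sketch of a style-adaptive convolution block in PyTorch.
# NOT the paper's SAConv implementation: it assumes a FiLM-like scheme in which
# a style embedding predicts per-channel scale (gamma) and shift (beta) applied
# after a 1-D convolution over frame-level content features.
import torch
import torch.nn as nn


class StyleAdaptiveConv1d(nn.Module):
    def __init__(self, channels: int, style_dim: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        # Predict per-channel scale and shift from the unified style vector.
        self.affine = nn.Linear(style_dim, 2 * channels)
        self.norm = nn.InstanceNorm1d(channels, affine=False)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # x:     (batch, channels, time)  frame-level content features
        # style: (batch, style_dim)       style embedding from any prompt modality
        gamma, beta = self.affine(style).chunk(2, dim=-1)      # (batch, channels) each
        h = self.norm(self.conv(x))
        return gamma.unsqueeze(-1) * h + beta.unsqueeze(-1)


if __name__ == "__main__":
    layer = StyleAdaptiveConv1d(channels=256, style_dim=128)
    feats = torch.randn(2, 256, 100)   # dummy content features
    style = torch.randn(2, 128)        # dummy style prompt embedding
    print(layer(feats, style).shape)   # torch.Size([2, 256, 100])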
Pages: 18117 - 18125
Page count: 9
Related Papers
24 in total
  • [1] M3TTS: Multi-modal text-to-speech of multi-scale style control for dubbing
    Liu, Yan
    Wei, Li-Fang
    Qian, Xinyuan
    Zhang, Tian-Hao
    Chen, Song-Lu
    Yin, Xu-Cheng
    PATTERN RECOGNITION LETTERS, 2024, 179 : 158 - 164
  • [2] CALM: Contrastive Cross-modal Speaking Style Modeling for Expressive Text-to-Speech Synthesis
    Meng, Yi
    Li, Xiang
    Wu, Zhiyong
    Li, Tingtian
    Sun, Zixun
    Xiao, Xinyu
    Sun, Chi
    Zhan, Hui
    Meng, Helen
    INTERSPEECH 2022, 2022, : 5533 - 5537
  • [3] E-TTS: Expressive Text-to-Speech Synthesis for Hindi Using Data Augmentation
    Gupta, Ishika
    Murthy, Hema A.
    SPEECH AND COMPUTER, SPECOM 2023, PT II, 2023, 14339 : 243 - 257
  • [4] Expressive Text-to-Speech Synthesis using Text Chat Dataset with Speaking Style Information
    Homma, Y.
    Kanagawa, H.
    Kobayashi, N.
    Ijima, Y.
    Saito, K.
    Transactions of the Japanese Society for Artificial Intelligence, 2023, 38 (03)
  • [5] Rule-Based Storytelling Text-to-Speech (TTS) Synthesis
    Ramli, Izzad
    Seman, Noraini
    Ardi, Norizah
    Jamil, Nursuriati
    2016 3RD INTERNATIONAL CONFERENCE ON MECHANICS AND MECHATRONICS RESEARCH (ICMMR 2016), 2016, 77
  • [6] ENHANCING SPEAKING STYLES IN CONVERSATIONAL TEXT-TO-SPEECH SYNTHESIS WITH GRAPH-BASED MULTI-MODAL CONTEXT MODELING
    Li, Jingbei
    Meng, Yi
    Li, Chenyi
    Wu, Zhiyong
    Meng, Helen
    Weng, Chao
    Su, Dan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7917 - 7921
  • [7] Incorporating Cross-speaker Style Transfer for Multi-language Text-to-Speech
    Shang, Zengqiang
    Huang, Zhihua
    Zhang, Haozhe
    Zhang, Pengyuan
    Yan, Yonghong
    INTERSPEECH 2021, 2021, : 1619 - 1623
  • [8] FINE-GRAINED STYLE CONTROL IN TRANSFORMER-BASED TEXT-TO-SPEECH SYNTHESIS
    Chen, Li-Wei
    Rudnicky, Alexander
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7907 - 7911
  • [9] VAENAR-TTS: Variational Auto-Encoder based Non-AutoRegressive Text-to-Speech Synthesis
    Lu, Hui
    Wu, Zhiyong
    Wu, Xixin
    Li, Xu
    Kang, Shiyin
    Liu, Xunying
    Meng, Helen
    INTERSPEECH 2021, 2021, : 3775 - 3779
  • [10] ICA-based hierarchical text classification for multi-domain text-to-speech synthesis
    Sevillano, X
    Alías, F
    Socoró, JC
    2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), VOL V, 2004, : 697 - 700