EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional Text-to-Speech Model

Cited by: 5
Authors
Cui, Chenye [1 ]
Ren, Yi [1 ]
Liu, Jinglin [1 ]
Chen, Feiyang [1 ]
Huang, Rongjie [1 ]
Lei, Ming [2 ]
Zhao, Zhou [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Alibaba Grp, Hangzhou, Peoples R China
Keywords
emotional speech dataset; speech synthesis; emotional text-to-speech; speech emotion classification;
DOI
10.21437/Interspeech.2021-1148
CLC numbers
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline codes
100104; 100213;
Abstract
Recently, there has been increasing interest in neural speech synthesis. While deep neural networks achieve state-of-the-art results in text-to-speech (TTS) tasks, generating more emotional and expressive speech remains a challenge due to the scarcity of high-quality emotional speech datasets and the lack of advanced emotional TTS models. In this paper, we first introduce and publicly release a Mandarin emotional speech dataset comprising 9,724 samples with audio files and human-labeled emotion annotations. We then propose a simple but efficient architecture for emotional speech synthesis called EMSpeech. Unlike models that require additional reference audio as input, our model predicts emotion labels directly from the input text and generates more expressive speech conditioned on the resulting emotion embedding. In the experiments, we first validate the effectiveness of the dataset with an emotion classification task. We then train our model on the proposed dataset and conduct a series of subjective evaluations. Finally, by showing comparable performance on the emotional speech synthesis task, we demonstrate the effectiveness of the proposed model.
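The abstract describes the core idea of EMSpeech: an emotion label is predicted from the input text alone (no reference audio), and the corresponding emotion embedding conditions the synthesizer. The sketch below illustrates that conditioning pattern with NumPy; all module names, dimensions, and the number of emotion classes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EMOTIONS = 7  # assumption: number of emotion classes in the dataset
HIDDEN = 16       # assumption: text-encoder hidden size

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class EmotionConditioner:
    """Predict an emotion from text features, then inject its embedding."""

    def __init__(self):
        # hypothetical parameters: a linear classifier head and an
        # emotion-embedding lookup table
        self.cls_w = rng.normal(size=(HIDDEN, NUM_EMOTIONS)) * 0.1
        self.emo_table = rng.normal(size=(NUM_EMOTIONS, HIDDEN)) * 0.1

    def __call__(self, text_hidden):
        # text_hidden: (T, HIDDEN) phoneme-level encoder outputs
        pooled = text_hidden.mean(axis=0)      # utterance-level summary
        probs = softmax(pooled @ self.cls_w)   # emotion distribution
        label = int(probs.argmax())            # predicted emotion label
        emo_emb = self.emo_table[label]        # (HIDDEN,) embedding
        # broadcast-add the emotion embedding to every encoder frame,
        # conditioning the downstream spectrogram decoder on the emotion
        return text_hidden + emo_emb, label, probs

hidden = rng.normal(size=(20, HIDDEN))         # fake encoder outputs
conditioned, label, probs = EmotionConditioner()(hidden)
print(conditioned.shape, label)
```

In a trained model the classifier head and embedding table would be learned jointly with the TTS backbone; the point of the sketch is only that emotion enters the pipeline as a text-derived embedding added to the encoder states, rather than being extracted from a reference recording.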
Pages: 2766-2770
Page count: 5