BERTIVITS: The Posterior Encoder Fusion of Pre-Trained Models and Residual Skip Connections for End-to-End Speech Synthesis

Cited: 0
Authors
Wang, Zirui [1 ]
Song, Minqi [1 ]
Zhou, Dongbo [1 ]
Affiliations
[1] Central China Normal University, Faculty of Artificial Intelligence in Education, Wuhan 430079, People's Republic of China
Source
APPLIED SCIENCES-BASEL, 2024, Vol. 14, No. 12
Keywords
pre-trained model; text to speech; neural TTS; speech synthesis; end-to-end model;
DOI
10.3390/app14125060
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Enhancing the naturalness and rhythmicity of generated audio is crucial in end-to-end speech synthesis. The current state-of-the-art (SOTA) model, VITS, uses a conditional variational autoencoder architecture, but its robustness is limited because it is trained solely on the text and spectrum data of the training set. In particular, its posterior encoder struggles to extract mid- and high-frequency features, which degrades waveform reconstruction. Existing efforts focus mainly on enhancing the prior encoder or the alignment algorithm, neglecting improvements to spectrum feature extraction. In response, we propose BERTIVITS, a novel model that integrates BERT into VITS. Our model features a redesigned posterior encoder with residual connections and uses pre-trained models to enhance spectrum feature extraction. Compared to VITS, BERTIVITS achieves significant improvements in subjective MOS scores (0.16 in English, 0.36 in Chinese) and reductions in the objective Mel-cepstral metric (0.52 in English, 0.49 in Chinese). BERTIVITS is tailored to single-speaker scenarios, advancing speech synthesis for applications such as post-class tutoring and telephone customer service.
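To make the abstract's architectural idea concrete, below is a minimal PyTorch sketch of a VITS-style posterior encoder whose convolutional stack uses residual skip connections and is fused with features from a pre-trained (BERT-style) encoder. All layer sizes, the projection-and-addition fusion scheme, the plain convolutional residual blocks, and the assumption that the pre-trained features are already upsampled to the spectrogram frame rate are illustrative assumptions for this sketch; the paper's actual implementation is not reproduced here.

```python
# Hypothetical sketch of a posterior encoder with residual skip connections
# and pre-trained feature fusion, in the spirit of the abstract. Not the
# paper's code; sizes and fusion scheme are assumptions.
import torch
import torch.nn as nn


class ResidualConvBlock(nn.Module):
    """1-D conv block wrapped in a residual skip connection."""

    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
        )

    def forward(self, x):
        return x + self.conv(x)  # skip connection around the conv stack


class PosteriorEncoder(nn.Module):
    """Maps a spectrogram (fused with pre-trained features) to the latent
    posterior parameters (mu, log-variance) of a conditional VAE, as in VITS."""

    def __init__(self, spec_channels=513, hidden=192, latent=192,
                 bert_dim=768, n_blocks=4):
        super().__init__()
        self.pre = nn.Conv1d(spec_channels, hidden, 1)
        self.bert_proj = nn.Conv1d(bert_dim, hidden, 1)  # fuse pre-trained features
        self.blocks = nn.ModuleList(
            ResidualConvBlock(hidden) for _ in range(n_blocks)
        )
        self.proj = nn.Conv1d(hidden, 2 * latent, 1)  # -> (mu, log sigma^2)

    def forward(self, spec, bert_feats):
        # spec: (B, spec_channels, T); bert_feats: (B, bert_dim, T), assumed
        # already aligned to the spectrogram frame rate.
        h = self.pre(spec) + self.bert_proj(bert_feats)
        for block in self.blocks:
            h = block(h)
        mu, logvar = self.proj(h).chunk(2, dim=1)
        # Reparameterization trick: sample z from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar


# Usage: a batch of 2 spectrograms, 100 frames each.
enc = PosteriorEncoder()
z, mu, logvar = enc(torch.randn(2, 513, 100), torch.randn(2, 768, 100))
```

The design choice sketched here, projecting the pre-trained features to the hidden width and adding them before the residual stack, is one simple way to realize the "posterior encoder fusion" the title refers to; concatenation or gating would be equally plausible alternatives.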
Pages: 14