On Loss Functions for Supervised Monaural Time-Domain Speech Enhancement

Cited by: 80
Authors
Kolbaek, Morten [1]
Tan, Zheng-Hua [1]
Jensen, Soren Holdt [1]
Jensen, Jesper [1,2]
Affiliations
[1] Aalborg Univ, Dept Elect Syst, DK-9220 Aalborg, Denmark
[2] Oticon AS, DK-2765 Smorum, Denmark
Keywords
Speech enhancement; fully convolutional neural networks; time-domain; objective intelligibility; HEARING-IMPAIRED LISTENERS; MEAN-SQUARE ERROR; INCREASE INTELLIGIBILITY; PERCEPTUAL EVALUATION; NEURAL-NETWORK; ALGORITHM; NOISE; SEPARATION; QUALITY
DOI
10.1109/TASLP.2020.2968738
CLC Classification
O42 [Acoustics]
Discipline Classification Codes
070206; 082403
Abstract
Many deep learning-based speech enhancement algorithms are designed to minimize the mean-square error (MSE) in some transform domain between a predicted and a target speech signal. However, optimizing for MSE does not necessarily guarantee high speech quality or intelligibility, which is the ultimate goal of many speech enhancement algorithms. Additionally, little is known about the impact of the loss function on the emerging class of time-domain deep learning-based speech enhancement systems. We study how popular loss functions influence the performance of such systems. First, we demonstrate that perceptually inspired loss functions might be advantageous over classical loss functions like MSE. Furthermore, we show that the learning rate is a crucial design parameter even for adaptive gradient-based optimizers, a point that has generally been overlooked in the literature. We also find that waveform-matching performance metrics must be used with caution, as in certain situations they can fail completely. Finally, we show that a loss function based on scale-invariant signal-to-distortion ratio (SI-SDR) achieves good general performance across a range of popular speech enhancement evaluation metrics, which suggests that SI-SDR is a good candidate as a general-purpose loss function for speech enhancement systems.
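
As a reading aid for the abstract above, the following is a minimal NumPy sketch of an SI-SDR-based loss of the kind the paper discusses. It is not taken from the paper itself; the function names si_sdr and si_sdr_loss are illustrative, and details such as the mean removal and the eps regularizer are assumptions. The estimate is projected onto the target to obtain a scaled reference, SI-SDR is the energy ratio, in dB, between that scaled reference and the residual, and the loss is the negative SI-SDR, so minimizing it maximizes SI-SDR. A real training pipeline would express the same computation in an automatic-differentiation framework.

    import numpy as np

    def si_sdr(estimate: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
        """Scale-invariant signal-to-distortion ratio (SI-SDR) in dB.

        Both inputs are equal-length time-domain waveforms; means are removed
        so the measure is invariant to DC offset as well as to scaling.
        """
        estimate = estimate - estimate.mean()
        target = target - target.mean()
        # Project the estimate onto the target to get the scaled reference alpha * target.
        alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
        target_scaled = alpha * target
        distortion = estimate - target_scaled
        return 10.0 * np.log10(
            (np.dot(target_scaled, target_scaled) + eps)
            / (np.dot(distortion, distortion) + eps)
        )

    def si_sdr_loss(estimate: np.ndarray, target: np.ndarray) -> float:
        """Negative SI-SDR, so that minimizing the loss maximizes SI-SDR."""
        return -si_sdr(estimate, target)
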
Pages: 825-838
Number of pages: 14
Related Papers
50 records in total
  • [1] A Time-domain Monaural Speech Enhancement with Feedback Learning
    Li, Andong
    Zheng, Chengshi
    Cheng, Linjuan
    Peng, Renhua
    Li, Xiaodong
    [J]. 2020 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2020, : 769 - 774
  • [2] Loss Functions for Deep Monaural Speech Enhancement
    Freiwald, Jan
    Schoenherr, Lea
    Schymura, Christopher
    Zeiler, Steffen
    Kolossa, Dorothea
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [3] On Synthesis for Supervised Monaural Speech Separation in Time Domain
    Chen, Jingjing
    Mao, Qirong
    Liu, Dong
    [J]. INTERSPEECH 2020, 2020, : 2627 - 2631
  • [4] Group Multi-Scale Convolutional Network for Monaural Speech Enhancement in Time-domain
    Yu, Juntao
    Jiang, Ting
    Yu, Jiacheng
    [J]. 2021 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2021, : 646 - 650
  • [5] Two-Stage Learning and Fusion Network With Noise Aware for Time-Domain Monaural Speech Enhancement
    Xiang, Xiaoxiao
    Zhang, Xiaojuan
    Chen, Haozhe
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1754 - 1758
  • [6] Visually Assisted Time-Domain Speech Enhancement
    Ideli, Elham
    Sharpe, Bruce
    Bajic, Ivan V.
    Vaughan, Rodney G.
    [J]. 2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019,
  • [7] A New Framework for Supervised Speech Enhancement in the Time Domain
    Pandey, Ashutosh
    Wang, Deliang
    [J]. 19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 1136 - 1140
  • [8] MAMGAN: Multiscale attention metric GAN for monaural speech enhancement in the time domain
    Guo, Huimin
    Jian, Haifang
    Wang, Yequan
    Wang, Hongchang
    Zhao, Xiaofan
    Zhu, Wenqi
    Cheng, Qinghua
    [J]. APPLIED ACOUSTICS, 2023, 209
  • [9] TIME-DOMAIN LOSS MODULATION BASED ON OVERLAP RATIO FOR MONAURAL CONVERSATIONAL SPEAKER SEPARATION
    Taherian, Hassan
    Wang, DeLiang
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 5744 - 5748
  • [10] Time-domain speech enhancement using generative adversarial networks
    Pascual, Santiago
    Serra, Joan
    Bonafonte, Antonio
    [J]. SPEECH COMMUNICATION, 2019, 114 : 10 - 21