MAMGAN: Multiscale attention metric GAN for monaural speech enhancement in the time domain

Citations: 8
Authors
Guo, Huimin [1 ,2 ]
Jian, Haifang [1 ]
Wang, Yequan [3 ]
Wang, Hongchang [1 ,2 ]
Zhao, Xiaofan [3 ]
Zhu, Wenqi [4 ]
Cheng, Qinghua [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Semicond, Lab Solid State Optoelect Informat Technol, Beijing 100083, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Beijing Acad Artificial Intelligence, Beijing 100089, Peoples R China
[4] North China Univ Technol, Sch Informat Sci & Technol, Beijing 100144, Peoples R China
Keywords
Speech enhancement; Time domain; Multiscale attention; Attention metric discriminator; RECURRENT NEURAL-NETWORK; SELF-ATTENTION; U-NET; NOISE
DOI
10.1016/j.apacoust.2023.109385
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
In speech enhancement (SE), a mismatch between the objective function used to train the SE model and the evaluation metric degrades the quality of the generated speech. Although existing studies have attempted to use a metric discriminator to learn a surrogate of the evaluation metric from data and guide generator updates, the metric discriminator's simple structure cannot closely approximate the metric function, which limits SE performance. This paper proposes a multiscale attention metric generative adversarial network (MAMGAN) to resolve this problem. In the metric discriminator, an attention mechanism is introduced to emphasize meaningful features along the spatial and channel directions, avoiding the feature loss caused by direct average pooling, better approximating the computation of the evaluation metric, and further improving SE performance. In addition, motivated by the effectiveness of self-attention in capturing long-term dependencies, we construct a multiscale attention module (MSAM) that fully considers multiple representations of the signal and better models the features of long sequences. Ablation experiments verify the effectiveness of the attention metric discriminator and the MSAM. Quantitative analysis on the Voice Bank + DEMAND dataset shows that MAMGAN outperforms various time-domain SE methods, achieving a perceptual evaluation of speech quality (PESQ) score of 3.30.
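To make the core idea concrete, the following is a minimal PyTorch sketch of an attention-pooled metric discriminator head: CBAM-style channel and spatial attention re-weight the convolutional feature map before pooling, so the vector fed to the score regressor keeps salient regions instead of a plain average. This is not the authors' released code; the module names, reduction ratio, and head sizes are illustrative assumptions.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights channels with a shared MLP over avg- and max-pooled stats."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                              # x: (B, C, H, W)
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3)))
                          + self.mlp(x.amax(dim=(2, 3))))
        return x * w[:, :, None, None]


class SpatialAttention(nn.Module):
    """Re-weights spatial positions from channel-wise avg and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                              # x: (B, C, H, W)
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(stats))


class AttentionMetricHead(nn.Module):
    """Predicts a normalized quality score (e.g. PESQ mapped to [0, 1]) from a
    discriminator's convolutional feature map. Attention runs *before* pooling,
    so averaging no longer discards informative features."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.fc = nn.Sequential(nn.Linear(channels, 64),
                                nn.ReLU(inplace=True),
                                nn.Linear(64, 1),
                                nn.Sigmoid())

    def forward(self, feat):                           # feat: (B, C, H, W)
        feat = self.sa(self.ca(feat))                  # emphasize useful regions
        return self.fc(feat.mean(dim=(2, 3)))          # pool only after attention


# Usage sketch with a hypothetical (batch, channels, time, freq) feature map:
head = AttentionMetricHead(channels=64)
score = head(torch.randn(2, 64, 32, 32))               # -> (2, 1) scores in [0, 1]

A generator trained against such a head receives gradients that follow the learned metric surrogate, which is the mechanism the abstract credits for closing the gap between the training objective and the evaluation metric.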
Pages: 11
Related Papers
50 records in total
  • [1] MambaGAN: Mamba based Metric GAN for Monaural Speech Enhancement
    Luo, Tianhao
    Zhou, Feng
    Bai, Zhongxin
    [J]. 2024 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING, IALP 2024, 2024, : 411 - 416
  • [2] TIME-FREQUENCY ATTENTION FOR MONAURAL SPEECH ENHANCEMENT
    Zhang, Qiquan
    Song, Qi
    Ni, Zhaoheng
    Nicolson, Aaron
    Li, Haizhou
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7852 - 7856
  • [3] GAN-in-GAN for Monaural Speech Enhancement
    Duan, Yicun
    Ren, Jianfeng
    Yu, Heng
    Jiang, Xudong
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 853 - 857
  • [4] CMGAN: Conformer-Based Metric-GAN for Monaural Speech Enhancement
    Abdulatif, Sherif
    Cao, Ruizhe
    Yang, Bin
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 2477 - 2493
  • [5] Harmonic Attention for Monaural Speech Enhancement
    Wang, Tianrui
    Zhu, Weibin
    Gao, Yingying
    Zhang, Shilei
    Feng, Junlan
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 2424 - 2436
  • [6] A Time-domain Monaural Speech Enhancement with Feedback Learning
    Li, Andong
    Zheng, Chengshi
    Cheng, Linjuan
    Peng, Renhua
    Li, Xiaodong
    [J]. 2020 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2020, : 769 - 774
  • [7] On Loss Functions for Supervised Monaural Time-Domain Speech Enhancement
    Kolbaek, Morten
    Tan, Zheng-Hua
    Jensen, Soren Holdt
    Jensen, Jesper
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2020, 28 : 825 - 838
  • [8] A Recursive Network with Dynamic Attention for Monaural Speech Enhancement
    Li, Andong
    Zheng, Chengshi
    Fan, Cunhang
    Peng, Renhua
    Li, Xiaodong
    [J]. INTERSPEECH 2020, 2020, : 2422 - 2426
  • [9] REDUNDANT CONVOLUTIONAL NETWORK WITH ATTENTION MECHANISM FOR MONAURAL SPEECH ENHANCEMENT
    Lan, Tian
    Lyu, Yilan
    Hui, Guoqiang
    Mokhosi, Refuoe
    Li, Sen
    Liu, Qiao
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2020, : 6654 - 6658
  • [10] Multi-stage attention network for monaural speech enhancement
    Wang, Kunpeng
    Lu, Wenjing
    Liu, Peng
    Yao, Juan
    Li, Huafeng
    [J]. IET SIGNAL PROCESSING, 2023, 17 (03)