PFRNet: Dual-Branch Progressive Fusion Rectification Network for Monaural Speech Enhancement

Cited by: 8
Authors
Yu, Runxiang [1 ,2 ]
Zhao, Ziwei [1 ,2 ]
Ye, Zhongfu [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230027, Anhui, Peoples R China
[2] Natl Engn Res Ctr Speech & Language Informat Proc, Hefei 230027, Anhui, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Transformers; Speech enhancement; Tensors; Convolution; Decoding; Time-frequency analysis; Fusion rectification block; interactive time-frequency improved transformer; monaural speech enhancement;
DOI
10.1109/LSP.2022.3222045
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
In recent years, transformer-based dual-branch frameworks that jointly estimate the magnitude and complex spectra have achieved state-of-the-art performance in monaural speech enhancement. However, insufficient use of the interactive information in the intermediate layers leaves each branch unable to compensate for and rectify the other. To address this problem, this letter proposes a novel dual-branch progressive fusion rectification network (PFRNet) for monaural speech enhancement. PFRNet is an encoder-decoder-based dual-branch structure with interactive improved real & complex transformers. Its fusion rectification block converts the implicit relationship between the two branches into a fusion feature via a frequency-domain mutual attention mechanism; this fusion feature serves as the medium for interaction in the intermediate layers. The interactive time-frequency improved real & complex transformer better exploits long-term dependencies in the time-frequency domain. Experimental results show that PFRNet outperforms advanced dual-branch speech enhancement approaches and previous state-of-the-art systems in terms of speech quality and intelligibility.
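As a rough illustration of the frequency-domain mutual attention described in the abstract, the PyTorch sketch below shows how a fusion rectification block might cross-attend the magnitude-branch and complex-branch features along the frequency axis. This is a minimal sketch under assumed conventions, not the paper's actual implementation: the class name, tensor layout (batch, channels, time, frequency), head count, and 1x1 merge convolution are all illustrative choices.

```python
# Minimal sketch of a frequency-domain mutual-attention fusion block
# (illustrative; not the PFRNet authors' released code). Both branches
# are assumed to produce features of shape (batch, channels, time, freq).
import torch
import torch.nn as nn


class FusionRectificationBlock(nn.Module):
    """Fuses magnitude- and complex-branch features with mutual attention
    computed over the frequency axis, one time frame at a time."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Each branch queries the other branch (queries from one stream,
        # keys/values from the other), so each can rectify its counterpart.
        self.mag_attends_cplx = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.cplx_attends_mag = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, mag: torch.Tensor, cplx: torch.Tensor) -> torch.Tensor:
        # Fold time into the batch so attention runs over the frequency
        # dimension: (B, C, T, F) -> (B*T, F, C).
        b, c, t, f = mag.shape
        mag_seq = mag.permute(0, 2, 3, 1).reshape(b * t, f, c)
        cplx_seq = cplx.permute(0, 2, 3, 1).reshape(b * t, f, c)

        # Mutual attention: each branch attends to the other's features.
        mag_fused, _ = self.mag_attends_cplx(mag_seq, cplx_seq, cplx_seq)
        cplx_fused, _ = self.cplx_attends_mag(cplx_seq, mag_seq, mag_seq)

        # Concatenate the two rectified streams and project back to C
        # channels, giving one shared fusion feature for the middle layers.
        fused = torch.cat([mag_fused, cplx_fused], dim=-1)         # (B*T, F, 2C)
        fused = fused.reshape(b, t, f, 2 * c).permute(0, 3, 1, 2)  # (B, 2C, T, F)
        return self.merge(fused)                                   # (B, C, T, F)


if __name__ == "__main__":
    block = FusionRectificationBlock(channels=64)
    mag = torch.randn(2, 64, 100, 161)   # e.g. 100 frames, 161 frequency bins
    cplx = torch.randn(2, 64, 100, 161)
    print(block(mag, cplx).shape)        # torch.Size([2, 64, 100, 161])
```

Running attention along the frequency axis (rather than time) reflects the "frequency-domain mutual attention" wording of the abstract; the progressive aspect would come from repeating such a block between successive middle layers of the two branches.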
Pages: 2358 - 2362
Number of Pages: 5
Related Papers
50 records in total
  • [1] Scale-aware dual-branch complex convolutional recurrent network for monaural speech enhancement
    Li, Yihao
    Sun, Meng
    Zhang, Xiongwei
    Van Hamme, Hugo
    [J]. COMPUTER SPEECH AND LANGUAGE, 2024, 86
  • [2] DBFNet: A Dual-Branch Fusion Network for Underwater Image Enhancement
    Sun, Kaichuan
    Tian, Yubo
    [J]. REMOTE SENSING, 2023, 15 (05)
  • [3] Progressive Dual-Branch Network for Low-Light Image Enhancement
    Cui, Hengshuai
    Li, Jinjiang
    Hua, Zhen
    Fan, Linwei
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [4] Convolutional fusion network for monaural speech enhancement
    Xian, Yang
    Sun, Yang
    Wang, Wenwu
    Naqvi, Syed Mohsen
    [J]. NEURAL NETWORKS, 2021, 143 : 97 - 107
  • [5] A Dual-Branch Speech Enhancement Model with Harmonic Repair
    Jia, Lizhen
    Xu, Yanyan
    Ke, Dengfeng
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (04)
  • [6] DBENet: Dual-Branch Brightness Enhancement Fusion Network for Low-Light Image Enhancement
    Chen, Yongqiang
    Wen, Chenglin
    Liu, Weifeng
    He, Wei
    [J]. ELECTRONICS, 2023, 12 (18)
  • [7] Conditional generative adversarial network with dual-branch progressive generator for underwater image enhancement
    Lin, Peng
    Wang, Yafei
    Wang, Guangyuan
    Yan, Xiaohong
    Jiang, Guangqi
    Fu, Xianping
    [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2022, 108
  • [8] A Dual-Branch Fusion Network for Surgical Instrument Segmentation
    [Authors not listed in this record; affiliations given as: Zhengzhou University, School of Electrical and Information Engineering, Zhengzhou, Henan 450001, China; second institution unspecified, 100190, China]
    [J]. IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 4: 1542 - 1554
  • [9] A Dual-branch Network for Infrared and Visible Image Fusion
    Fu, Yu
    Wu, Xiao-Jun
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 10675 - 10680
  • [10] DBT-Net: Dual-Branch Federative Magnitude and Phase Estimation With Attention-in-Attention Transformer for Monaural Speech Enhancement
    Yu, Guochen
    Li, Andong
    Wang, Hui
    Wang, Yutian
    Ke, Yuxuan
    Zheng, Chengshi
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 2629 - 2644