A Model Compression Method With Matrix Product Operators for Speech Enhancement

Cited by: 9
Authors
Sun, Xingwei [1 ,2 ]
Gao, Ze-Feng [3 ]
Lu, Zhong-Yi [3 ]
Li, Junfeng [1 ,2 ]
Yan, Yonghong [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Acoust, Key Lab Speech Acoust & Content Understanding, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[3] Renmin Univ China, Sch Sci, Beijing 100872, Peoples R China
[4] Chinese Acad Sci, Xinjiang Tech Inst Phys & Chem, Xinjiang Key Lab Minor Speech & Language Informat, Xinjiang 830011, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Speech enhancement; Noise measurement; Acoustics; Matrix decomposition; Training; Neural networks; model compression; pruning; matrix product operators; NEURAL-NETWORK; NOISE;
DOI
10.1109/TASLP.2020.3030495
Chinese Library Classification
O42 [Acoustics]
Discipline Classification Codes
070206; 082403
Abstract
Deep neural network (DNN) based speech enhancement approaches have achieved promising performance. However, these models typically involve an enormous number of parameters, which seriously restricts the deployment of speech enhancement on devices with limited resources. To address this issue, model compression techniques are being widely studied. In this paper, we propose a model compression method based on matrix product operators (MPO) to substantially reduce the number of parameters in DNN models for speech enhancement. In this method, the weight matrices in the linear transformations of the neural network model are replaced by their MPO decomposition format before training. In the experiments, this process is applied to causal neural network models, namely the feedforward multilayer perceptron (MLP) and long short-term memory (LSTM) models. Both MLP and LSTM models, with and without compression, are then used to estimate the ideal ratio mask for monaural speech enhancement. The experimental results show that the proposed MPO-based method outperforms the widely used pruning method for speech enhancement under various compression rates, and that further improvement can be achieved at low compression rates. Our proposal provides an effective model compression method for speech enhancement, especially for cloud-free applications.
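The MPO format described in the abstract is, in essence, a tensor-train style factorization of each weight matrix: the matrix is reshaped into a higher-order tensor whose input/output index pairs are split across a chain of small 4-way cores, and the bond dimension between cores controls the compression rate. The following NumPy sketch illustrates the idea only; the function names, the two-factor dimension split, and the SVD-based truncation are illustrative assumptions, not the paper's exact training-time construction (the paper learns the cores directly rather than decomposing a pretrained matrix).

```python
import numpy as np

def mpo_decompose(W, in_dims, out_dims, max_rank):
    """Factor a weight matrix W of shape (prod(in_dims), prod(out_dims))
    into a chain of 4-way MPO cores via sequential truncated SVDs."""
    n = len(in_dims)
    # Reshape W into a 2n-way tensor and interleave the (i_k, j_k) index pairs.
    perm = [p for k in range(n) for p in (k, n + k)]
    rest = W.reshape(*in_dims, *out_dims).transpose(perm)
    cores, r_prev = [], 1
    for k in range(n - 1):
        ik, jk = in_dims[k], out_dims[k]
        mat = rest.reshape(r_prev * ik * jk, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, S.size)           # bond-dimension truncation
        cores.append(U[:, :r].reshape(r_prev, ik, jk, r))
        rest = S[:r, None] * Vt[:r]         # carry the remainder forward
        r_prev = r
    cores.append(rest.reshape(r_prev, in_dims[-1], out_dims[-1], 1))
    return cores

def mpo_to_matrix(cores, in_dims, out_dims):
    """Contract the MPO cores back into a dense matrix (for checking)."""
    n = len(cores)
    out = cores[0]
    for c in cores[1:]:                     # contract the shared bond index
        out = np.tensordot(out, c, axes=([out.ndim - 1], [0]))
    out = out.reshape([d for k in range(n) for d in (in_dims[k], out_dims[k])])
    perm = [p for k in range(n) for p in (k, n + k)]
    out = out.transpose(np.argsort(perm))   # undo the interleaving
    return out.reshape(int(np.prod(in_dims)), int(np.prod(out_dims)))
```

With full rank the reconstruction is exact; truncating the bond dimension trades accuracy for parameters. For example, a 6x6 matrix split as in_dims=(2, 3), out_dims=(3, 2) with max_rank=2 stores two cores of 12 entries each (24 parameters instead of 36), which is how the compression rate is controlled in this family of methods.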
Pages: 2837 - 2847
Number of pages: 11