On the compression of neural networks using ℓ0-norm regularization and weight pruning

Cited by: 6
Authors
Oliveira, Felipe Dennis de Resende [1 ]
Batista, Eduardo Luiz Ortiz [1 ]
Seara, Rui [1 ]
Affiliations
[1] Univ Fed Santa Catarina, Dept Elect Engn, LINSE Circuits & Signal Proc Lab, BR-88040900 Florianopolis, SC, Brazil
Keywords
Machine learning; Neural networks; Network compression; Norm regularization; Weight pruning;
DOI
10.1016/j.neunet.2023.12.019
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Despite the growing availability of high-capacity computational platforms, implementation complexity remains a major concern for the real-world deployment of neural networks. This concern stems not only from the huge costs of state-of-the-art network architectures, but also from the recent push towards edge intelligence and the use of neural networks in embedded applications. In this context, network compression techniques have been gaining interest due to their ability to reduce deployment costs while keeping inference accuracy at satisfactory levels. The present paper is dedicated to the development of a novel compression scheme for neural networks. To this end, a new form of ℓ0-norm-based regularization is first developed, which is capable of inducing strong sparseness in the network during training. Then, by targeting the smaller weights of the trained network with pruning techniques, smaller yet highly effective networks can be obtained. The proposed compression scheme also involves the use of ℓ2-norm regularization to avoid overfitting, as well as fine-tuning to improve the performance of the pruned network. Experimental results are presented to show the effectiveness of the proposed scheme and to make comparisons with competing approaches.
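For illustration only, the sketch below outlines a generic pipeline of the kind the abstract describes, written in PyTorch: training with a smooth ℓ0-norm surrogate plus ℓ2 weight decay, magnitude-based pruning of small weights, and fine-tuning with the pruning mask held fixed. The surrogate used here (sum of 1 - exp(-β|w|)), the function names (l0_surrogate, prune, fine_tune), and all hyperparameter values are assumptions for the sake of the example and are not necessarily the regularizer or settings proposed in the paper.

```python
# Minimal sketch of an l0-style compress-then-prune-then-fine-tune pipeline.
# Assumptions: a smooth l0 surrogate sum(1 - exp(-beta*|w|)); the paper's exact
# regularizer, thresholds, and schedules may differ.
import torch
import torch.nn as nn

def l0_surrogate(model, beta=10.0):
    """Smooth approximation of the l0 norm of all weights (assumed form)."""
    penalty = 0.0
    for p in model.parameters():
        penalty = penalty + (1.0 - torch.exp(-beta * p.abs())).sum()
    return penalty

def train(model, loader, epochs=5, lam_l0=1e-4, lam_l2=1e-4, lr=1e-3):
    """Train with the l0 surrogate penalty; weight_decay provides l2 regularization."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=lam_l2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y) + lam_l0 * l0_surrogate(model)
            loss.backward()
            opt.step()

def prune(model, threshold=1e-2):
    """Zero out small-magnitude weights; return per-parameter keep masks."""
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            mask = (p.abs() >= threshold).to(p.dtype)
            p.mul_(mask)
            masks[name] = mask
    return masks

def fine_tune(model, masks, loader, epochs=2, lr=1e-4):
    """Retrain surviving weights; gradients of pruned weights are masked each step."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            for name, p in model.named_parameters():
                if p.grad is not None:
                    p.grad.mul_(masks[name])
            opt.step()
```

A typical usage under these assumptions would be: train(model, loader), then masks = prune(model), then fine_tune(model, masks, loader), reporting sparsity and accuracy before and after fine-tuning.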
Pages: 343-352
Page count: 10