On the compression of neural networks using ℓ0-norm regularization and weight pruning

Cited by: 6
Authors
Oliveira, Felipe Dennis de Resende [1 ]
Batista, Eduardo Luiz Ortiz [1 ]
Seara, Rui [1 ]
Affiliations
[1] Univ Fed Santa Catarina, Dept Elect Engn, LINSE Circuits & Signal Proc Lab, BR-88040900 Florianopolis, SC, Brazil
Keywords
Machine learning; Neural networks; Network compression; Norm regularization; Weight pruning;
DOI
10.1016/j.neunet.2023.12.019
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the growing availability of high-capacity computational platforms, implementation complexity remains a major concern for the real-world deployment of neural networks. This concern stems not only from the huge costs of state-of-the-art network architectures, but also from the recent push towards edge intelligence and the use of neural networks in embedded applications. In this context, network compression techniques have been gaining interest due to their ability to reduce deployment costs while keeping inference accuracy at satisfactory levels. The present paper is dedicated to the development of a novel compression scheme for neural networks. To this end, a new form of ℓ0-norm-based regularization is first developed, which is capable of inducing strong sparseness in the network during training. Then, by targeting the smaller weights of the trained network with pruning techniques, smaller yet highly effective networks can be obtained. The proposed compression scheme also involves the use of ℓ2-norm regularization to avoid overfitting, as well as fine-tuning to improve the performance of the pruned network. Experimental results are presented to demonstrate the effectiveness of the proposed scheme and to compare it with competing approaches.
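The pruning step described in the abstract (zeroing the smaller weights of the trained network) can be illustrated with a minimal magnitude-based pruning sketch. Note this is a generic illustration, not the authors' exact ℓ0-norm surrogate or training procedure; the function name and sparsity parameter are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of entries with the smallest magnitude.

    This mimics the post-training pruning stage: after sparsity-inducing
    regularization has driven many weights towards zero, the smallest ones
    are removed outright.
    """
    w = weights.copy()
    k = int(np.floor(sparsity * w.size))  # number of weights to prune
    if k == 0:
        return w
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    w[np.abs(w) <= threshold] = 0.0
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)  # prune half of the 16 weights
print(np.count_nonzero(pruned))   # → 8
```

In the full scheme, this step would be followed by fine-tuning the surviving weights to recover any accuracy lost to pruning.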
Pages: 343-352
Page count: 10
Related papers
50 in total
  • [41] Compression of Deep Convolutional Neural Network Using Additional Importance-Weight-Based Filter Pruning Approach
    Sawant, Shrutika S.
    Wiedmann, Marco
    Goeb, Stephan
    Holzer, Nina
    Lang, Elmar W.
    Goetz, Theresa
    APPLIED SCIENCES-BASEL, 2022, 12 (21):
  • [42] Decision tree pruning using backpropagation neural networks
    Kijsirikul, B
    Chongkasemwongse, K
    IJCNN'01: INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2001, : 1876 - 1880
  • [43] Progressive compression and weight reinforcement for spiking neural networks
    Elbez, Hammouda
    Benhaoua, Mohammed Kamel
    Devienne, Philippe
    Boulet, Pierre
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (11):
  • [44] Sparseness Ratio Allocation and Neuron Re-pruning for Neural Networks Compression
    Guo, Li
    Zhou, Dajiang
    Zhou, Jinjia
    Kimura, Shinji
    2018 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2018,
  • [45] Fast Convolutional Neural Networks in Low Density FPGAs Using Zero-Skipping and Weight Pruning
    Vestias, Mario P.
    Duarte, Rui Policarpo
    de Sousa, Jose T.
    Neto, Horacio C.
    ELECTRONICS, 2019, 8 (11)
  • [46] Joint inversion of gravity and gravity gradient data using smoothed L0 norm regularization algorithm with sensitivity matrix compression
    Niu, Tingting
    Zhang, Gang
    Zhang, Mengting
    Zhang, Guibin
    FRONTIERS IN EARTH SCIENCE, 2023, 11
  • [47] Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks
    Xiao, Xuanzhe
    Li, Zeng
    Xie, Chuanlong
    Zhou, Fengwei
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PART X, 2023, 14263 : 236 - 247
  • [48] A novel weight pruning strategy for light weight neural networks with application to the diagnosis of skin disease
    Xiang, Kun
    Peng, Linlin
    Yang, Haiqiong
    Li, Mingxin
    Cao, Zhongfa
    Jiang, Shancheng
    Qu, Gang
    APPLIED SOFT COMPUTING, 2021, 111
  • [49] Learning Optimized Structure of Neural Networks by Hidden Node Pruning With L1 Regularization
    Xie, Xuetao
    Zhang, Huaqing
    Wang, Junze
    Chang, Qin
    Wang, Jian
    Pal, Nikhil R.
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (03) : 1333 - 1346
  • [50] IMAGE COMPRESSION USING NEURAL NETWORKS
    WALKER, NP
    EGLEN, SJ
    LAWRENCE, BA
    GEC JOURNAL OF RESEARCH, 1994, 11 (02): : 66 - 75