Magnitude and Uncertainty Pruning Criterion for Neural Networks

Cited by: 0
Authors
Ko, Vinnie [1 ]
Oehmcke, Stefan [2 ]
Gieseke, Fabian [2 ]
Affiliations
[1] Univ Oslo, Oslo, Norway
[2] Univ Copenhagen, Copenhagen, Denmark
Keywords
Neural network compression; pruning; overparameterization; Wald test; maximum likelihood
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Neural networks have achieved dramatic improvements in recent years and represent the state of the art for many real-world tasks. One drawback, however, is that many of these models are overparameterized, which makes them both computationally and memory intensive. Furthermore, overparameterization can also lead to undesired overfitting side effects. Inspired by recently proposed magnitude-based pruning schemes and the Wald test from the field of statistics, we introduce a novel magnitude and uncertainty (M&U) pruning criterion that helps to lessen such shortcomings. One important advantage of our M&U pruning criterion is that it is scale-invariant, a property that purely magnitude-based pruning criteria lack. In addition, we present a "pseudo bootstrap" scheme, which can efficiently estimate the uncertainty of the weights by using their update information during training. Our experimental evaluation, based on various neural network architectures and datasets, shows that our new criterion yields more compressed models than magnitude-based pruning criteria alone, while at the same time losing less predictive power.
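To make the abstract's Wald-test framing concrete, the sketch below scores each weight by its magnitude divided by an estimated standard error, so weights that are both small and unstable across training steps are pruned first; dividing by the standard error is what makes the score robust to a rescaling of the weights. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (pseudo_bootstrap_std, m_and_u_scores, prune_mask) and the checkpoint-based uncertainty estimator are hypothetical stand-ins for the paper's pseudo-bootstrap scheme.

```python
import numpy as np

def pseudo_bootstrap_std(weight_snapshots):
    # Stand-in for the paper's "pseudo bootstrap": estimate per-weight
    # uncertainty from weight values recorded at successive training
    # steps. The exact estimator in the paper may differ.
    # weight_snapshots: shape (T, n), the flattened weight vector at
    # T late-training checkpoints.
    return weight_snapshots.std(axis=0) + 1e-12  # avoid division by zero

def m_and_u_scores(final_weights, weight_snapshots):
    # Wald-style magnitude-and-uncertainty score: |w| / se(w).
    # A weight that is small *and* fluctuates a lot across checkpoints
    # gets a low score and is a candidate for pruning.
    se = pseudo_bootstrap_std(weight_snapshots)
    return np.abs(final_weights) / se

def prune_mask(scores, sparsity):
    # Boolean mask keeping the top (1 - sparsity) fraction of weights.
    k = int(len(scores) * sparsity)
    threshold = np.partition(scores, k)[k]
    return scores >= threshold

# Toy usage: 5 checkpoints of a 10-weight layer.
rng = np.random.default_rng(0)
snapshots = rng.normal(0.0, 0.1, size=(5, 10)) + rng.normal(0.0, 1.0, size=10)
mask = prune_mask(m_and_u_scores(snapshots[-1], snapshots), sparsity=0.5)
print(mask)  # False marks the weights selected for pruning
```

Note the contrast with plain magnitude pruning, which ranks by |w| alone: multiplying a layer's weights by a constant shifts its ranking relative to other layers, whereas in this sketch the ratio |w| / se(w) is unchanged because the standard-error estimate scales by the same constant.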
Pages: 2317-2326
Page count: 10
Related Papers
50 items in total
  • [21] The magnitude of the diagonal elements in neural networks
    DeWilde, P
    NEURAL NETWORKS, 1997, 10 (03) : 499 - 504
  • [22] Iterative clustering pruning for convolutional neural networks
    Chang, Jingfei
    Lu, Yang
    Xue, Ping
    Xu, Yiqun
    Wei, Zhen
    KNOWLEDGE-BASED SYSTEMS, 2023, 265
  • [23] Leveraging Structured Pruning of Convolutional Neural Networks
    Tessier, Hugo
    Gripon, Vincent
    Leonardon, Mathieu
    Arzel, Matthieu
    Bertrand, David
    Hannagan, Thomas
    2022 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS), 2022, : 174 - 179
  • [24] On rule pruning using fuzzy neural networks
    Department of Computer Science, Regional Engineering College, Durgapur, W.B., India
    FUZZY SETS AND SYSTEMS, 3: 335-347
  • [25] Flattening Layer Pruning in Convolutional Neural Networks
    Jeczmionek, Ernest
    Kowalski, Piotr A.
    SYMMETRY-BASEL, 2021, 13 (07):
  • [26] DyPrune: Dynamic Pruning Rates for Neural Networks
    Aires Jonker, Richard Adolph
    Poudel, Roshan
    Fajarda, Olga
    Oliveira, Jose Luis
    Lopes, Rui Pedro
    Matos, Sergio
    PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT I, 2023, 14115 : 146 - 157
  • [27] Activation-Based Pruning of Neural Networks
    Ganguli, Tushar
    Chong, Edwin K. P.
    Werner, Frank
    ALGORITHMS, 2024, 17 (01)
  • [28] Sparse optimization guided pruning for neural networks
    Shi, Yong
    Tang, Anda
    Niu, Lingfeng
    Zhou, Ruizhi
    NEUROCOMPUTING, 2024, 574
  • [29] Structured Pruning of Deep Convolutional Neural Networks
    Anwar, Sajid
    Hwang, Kyuyeon
    Sung, Wonyong
    ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2017, 13 (03)
  • [30] Structured pruning of neural networks for constraints learning
    Cacciola, Matteo
    Frangioni, Antonio
    Lodi, Andrea
    OPERATIONS RESEARCH LETTERS, 2024, 57