Speeding-up pruning for Artificial Neural Networks: Introducing Accelerated Iterative Magnitude Pruning

Cited by: 8
Authors
Zullich, Marco [1 ]
Medvet, Eric [1 ]
Pellegrino, Felice Andrea [1 ]
Ansuini, Alessio [2 ]
Affiliations
[1] Univ Trieste, Dept Engn & Architecture, Trieste, Italy
[2] AREA Sci Pk, Trieste, Italy
Keywords
Artificial Neural Network; Convolutional Neural Network; Neural Network Pruning; Magnitude Pruning; Lottery Ticket Hypothesis;
DOI
10.1109/ICPR48806.2021.9412705
Chinese Library Classification (CLC) Code
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the pruning of Artificial Neural Networks (ANNs) has become the focus of much research, owing to the extreme overparametrization of such models. This has urged the scientific community to investigate methods for simplifying the structure of weights in ANNs, mainly in an effort to reduce the time needed for both training and inference. Frankle and Carbin [1], and later Renda, Frankle, and Carbin [2], introduced and refined an iterative pruning method which is able to effectively prune the network of a great portion of its parameters with little to no loss in performance. On the downside, this method requires a large amount of time to apply, since, for each iteration, the network has to be trained for (almost) the same number of epochs as the unpruned network. In this work, we show that, in a limited setting, if targeting high overall sparsity rates, this time can be effectively reduced for each iteration, save for the last one, by more than 50%, while yielding a final product (i.e., the final pruned network) whose performance is comparable to the ANN obtained using the existing method.
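The procedure described in the abstract can be summarized with a short sketch. The following Python snippet is an illustrative reconstruction only, assuming a PyTorch-style model and weight rewinding as in the original lottery ticket procedure; the helper names (train_fn, magnitude_prune, apply_masks), the per-iteration pruning rate, and the halved epoch budget for intermediate iterations are assumptions drawn from the abstract, not the authors' actual implementation.

    # Illustrative sketch of accelerated iterative magnitude pruning (IMP).
    # Assumption: `train_fn(model, epochs, masks)` is a user-supplied training loop
    # that re-applies the masks after every optimizer step so pruned weights stay zero.
    import copy
    import torch
    import torch.nn as nn

    def magnitude_prune(model, masks, fraction):
        """Prune `fraction` of the currently surviving weights with smallest magnitude."""
        surviving = torch.cat([
            p.detach().abs().flatten()[masks[n].flatten() > 0]
            for n, p in model.named_parameters() if n in masks
        ])
        threshold = torch.quantile(surviving, fraction)
        return {
            n: masks[n] * (p.detach().abs() > threshold).float()
            for n, p in model.named_parameters() if n in masks
        }

    def apply_masks(model, masks):
        """Zero out the pruned weights in place."""
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])

    def accelerated_imp(model, train_fn, full_epochs, iterations, prune_per_iter=0.2):
        """Run IMP, shortening training on all but the final iteration."""
        init_state = copy.deepcopy(model.state_dict())  # kept for weight rewinding
        # Only multi-dimensional tensors (conv kernels, weight matrices) are masked.
        masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
        for it in range(iterations):
            last = (it == iterations - 1)
            # Intermediate iterations use a reduced epoch budget (here: half);
            # only the final iteration trains for the full number of epochs.
            epochs = full_epochs if last else full_epochs // 2
            train_fn(model, epochs=epochs, masks=masks)
            if not last:
                masks = magnitude_prune(model, masks, prune_per_iter)
                model.load_state_dict(init_state)  # rewind surviving weights to their initial values
                apply_masks(model, masks)
        return model, masks

Under these assumptions, a call such as accelerated_imp(model, train_fn, full_epochs=160, iterations=5) performs four pruning rounds of 20% each (roughly 59% overall sparsity), trains the first four iterations for 80 epochs each, and trains only the final pruned network for the full 160 epochs.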
Pages: 3868-3875
Number of pages: 8
Related Papers
50 items in total
  • [21] Dataflow-Based Pruning for Speeding up Superoptimization
    Mukherjee, Manasij
    Kant, Pranav
    Liu, Zhengyang
    Regehr, John
    PROCEEDINGS OF THE ACM ON PROGRAMMING LANGUAGES-PACMPL, 2020, 4 (OOPSLA):
  • [22] A dynamic node decaying method for pruning artificial neural networks
    Shahjahan, M
    Murase, K
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2003, E86-D (04): 736 - 751
  • [23] Activity-based pruning in developmental artificial neural networks
    Rust, AG
    Adams, R
    George, S
    Bolouri, H
    FOURTH EUROPEAN CONFERENCE ON ARTIFICIAL LIFE, 1997, : 224 - 233
  • [24] Structure Optimization of Artificial Neural Networks Using Pruning Methods
    Ciganek, Jan
    Osusky, Jakub
    2018 CYBERNETICS & INFORMATICS (K&I), 2018,
  • [25] A dynamic node decaying method for pruning artificial neural networks
    Shahjahan, Md.
    Murase, Kazuyuki
    IEICE Transactions on Information and Systems, 2003, E86-D (04) : 736 - 751
  • [26] Optimal pruning in neural networks
    Barbato, DML
    Kinouchi, O
    PHYSICAL REVIEW E, 2000, 62 (06): 8387 - 8394
  • [27] ITERATIVE PRUNING IN 2ND-ORDER RECURRENT NEURAL NETWORKS
    CASTELLANO, G
    FANELLI, AM
    PELILLO, M
    NEURAL PROCESSING LETTERS, 1995, 2 (06) : 5 - 8
  • [28] Stage-Wise Magnitude-Based Pruning for Recurrent Neural Networks
    Li, Guiying
    Yang, Peng
    Qian, Chao
    Hong, Richang
    Tang, Ke
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02) : 1666 - 1680
  • [29] ZEROTH-ORDER TOPOLOGICAL INSIGHTS INTO ITERATIVE MAGNITUDE PRUNING
    Balwani, Aishwarya
    Krzyston, Jakob
    TOPOLOGICAL, ALGEBRAIC AND GEOMETRIC LEARNING WORKSHOPS 2022, VOL 196, 2022, 196
  • [30] Automatic Pruning Rate Derivation for Structured Pruning of Deep Neural Networks
    Sakai, Yasufumi
    Iwakawa, Akinori
    Tabaru, Tsuguchika
    Inoue, Atsuki
    Kawaguchi, Hiroshi
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2561 - 2567