EDropout: Energy-Based Dropout and Pruning of Deep Neural Networks

Cited by: 26
Authors
Salehinejad, Hojjat [1 ]
Valaee, Shahrokh [1 ]
Affiliations
[1] Univ Toronto, Dept Elect & Comp Engn, Toronto, ON M5S 3G4, Canada
Keywords
Dropout; energy-based models (EBMs); pruning deep neural networks (DNNs);
DOI
10.1109/TNNLS.2021.3069970
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dropout is a well-known regularization method that samples a sub-network from a larger deep neural network and trains different sub-networks on different subsets of the data. Inspired by the dropout concept, we propose EDropout, an energy-based framework for pruning neural networks in classification tasks. In this approach, a set of binary pruning state vectors (the population) represents a set of corresponding sub-networks of an arbitrary original neural network. An energy loss function assigns a scalar energy loss value to each pruning state. The energy-based model (EBM) stochastically evolves the population to find states with lower energy loss. The best pruning state is then selected and applied to the original network. Similar to dropout, the kept weights are updated using backpropagation in a probabilistic model. The EBM then searches for better pruning states, and the cycle continues. Each iteration thus switches between the energy model, which manages the pruning states, and the probabilistic model, which updates the kept weights. The population can dynamically converge to a pruning state, which can be interpreted as dropout leading to pruning of the network. From an implementation perspective, unlike most pruning methods, EDropout can prune neural networks without manual modification of the network architecture code. We have evaluated the proposed method on different flavors of ResNets, AlexNet, l1 pruning, ThiNet, ChannelNet, and SqueezeNet on the Kuzushiji, Fashion, CIFAR-10, CIFAR-100, Flowers, and ImageNet data sets, and compared the pruning rate and classification performance of the models. Networks trained with EDropout on average achieved a pruning rate of more than 50% of the trainable parameters, with less than about 5% and 1% drops in Top-1 and Top-5 classification accuracy, respectively.
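The alternating procedure the abstract describes (evolve a population of binary pruning states under an energy loss, apply the best state, then update the kept weights by backpropagation) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy linear model, the mutation rate, the logistic-loss-plus-sparsity energy, and all names below are hypothetical choices made only to show the loop structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a linear model on a synthetic binary-classification task.
n_features, n_samples = 20, 200
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = (X @ true_w > 0).astype(float)

w = 0.1 * rng.normal(size=n_features)  # trainable weights

def energy(mask, w):
    """Scalar energy loss of the sub-network defined by a binary pruning
    state: logistic loss plus a small kept-fraction penalty (hypothetical)."""
    z = X @ (w * mask)
    p = 1.0 / (1.0 + np.exp(-z))
    ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return ce + 0.01 * mask.mean()

# Population of binary pruning state vectors, one sub-network per row.
pop_size = 16
population = (rng.random((pop_size, n_features)) > 0.5).astype(float)

for it in range(50):
    # Energy model: stochastically evolve the population toward lower energy
    # via random bit flips, keeping a candidate only if its energy improves.
    energies = np.array([energy(m, w) for m in population])
    flips = rng.random(population.shape) < 0.05
    candidates = np.abs(population - flips)
    cand_energies = np.array([energy(m, w) for m in candidates])
    better = cand_energies < energies
    population[better] = candidates[better]
    energies[better] = cand_energies[better]

    # Probabilistic model: apply the best pruning state and update only the
    # kept weights by a gradient step (pruned columns get zero gradient).
    best = population[np.argmin(energies)]
    z = X @ (w * best)
    p = 1.0 / (1.0 + np.exp(-z))
    grad = (X * best).T @ (p - y) / n_samples
    w -= 0.5 * grad

best_mask = population[np.argmin([energy(m, w) for m in population])]
print("kept fraction:", best_mask.mean())
```

Because only bit flips that lower the energy are accepted, the population can converge to a single pruning state, which is the sense in which dropout-style sub-network sampling turns into pruning.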
Pages: 5279-5292
Page count: 14
Related papers
50 items
  • [1] A FRAMEWORK FOR PRUNING DEEP NEURAL NETWORKS USING ENERGY-BASED MODELS
    Salehinejad, Hojjat
    Valaee, Shahrokh
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3920 - 3924
  • [2] DEEP LEARNING BASED METHOD FOR PRUNING DEEP NEURAL NETWORKS
    Li, Lianqiang
    Zhu, Jie
    Sun, Ming-Ting
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2019, : 312 - 317
  • [3] Fine-Tuning Dropout Regularization in Energy-Based Deep Learning
    de Rosa, Gustavo H.
    Roder, Mateus
    Papa, Joao P.
    [J]. PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2021, 2021, 12702 : 99 - 108
  • [4] Selective Dropout for Deep Neural Networks
    Barrow, Erik
    Eastwood, Mark
    Jayne, Chrisina
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2016, PT III, 2016, 9949 : 519 - 528
  • [5] Methods for Pruning Deep Neural Networks
    Vadera, Sunil
    Ameen, Salem
    [J]. IEEE ACCESS, 2022, 10 : 63280 - 63300
  • [6] On Energy-Based Models with Overparametrized Shallow Neural Networks
    Domingo-Enrich, Carles
    Bietti, Alberto
    Vanden-Eijnden, Eric
    Bruna, Joan
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [7] Heuristic-based automatic pruning of deep neural networks
    Choudhary, Tejalal
    Mishra, Vipul
    Goswami, Anurag
    Sarangapani, Jagannathan
    [J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (06): : 4889 - 4903
  • [9] Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks
    Belay, Kaleab
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 13126 - 13127
  • [10] Energy-Based Clustering for Pruning Heterogeneous Ensembles
    Cela, Javier
    Suarez, Alberto
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT I, 2018, 11139 : 346 - 351