FP-AGL: Filter Pruning With Adaptive Gradient Learning for Accelerating Deep Convolutional Neural Networks

Cited by: 14
Authors
Kim, Nam Joon [1 ,2 ]
Kim, Hyun [1 ,2 ]
Affiliations
[1] Seoul Natl Univ Sci & Technol, Dept Elect & Informat Engn, Seoul 01811, South Korea
[2] Seoul Natl Univ Sci & Technol, Res Ctr Elect & Informat Technol, Seoul 01811, South Korea
Funding
National Research Foundation of Singapore
Keywords
Adaptive gradient learning; convolutional neural networks; filter pruning; lightweight technique; Taylor expansion; CNN
DOI
10.1109/TMM.2022.3189496
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Filter pruning reduces the computational complexity, inference time, and memory footprint of convolutional neural networks (CNNs) by removing unnecessary filters with an acceptable drop in accuracy, thereby accelerating the network. Unlike traditional filter pruning methods that zero out filters, we propose two techniques that prune more filters with less performance degradation, inspired by existing work on centripetal stochastic gradient descent (C-SGD), in which filters are removed only after those to be pruned have converged to the same value. First, to minimize the negative effect of the centripetal vectors that gradually draw filters toward each other, we redesign these vectors with a Taylor-based method that accounts for each vector's effect on the loss function. Second, we propose an adaptive gradient learning (AGL) technique that updates weights while adaptively modifying their gradients. AGL mitigates performance degradation because some gradients retain their original direction, and it minimizes accuracy loss by making the filters to be pruned converge exactly to a single point. Finally, we demonstrate the superiority of the proposed method on various datasets and networks. In particular, on the ILSVRC-2012 dataset, our method removes 52.09% of the FLOPs of ResNet-50 with a negligible 0.15% drop in top-1 accuracy. As a result, we achieve the best trade-off between accuracy and computational complexity among the methods reported in previous studies.
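To make the mechanism concrete, the following is a minimal, hypothetical NumPy sketch of the two ideas the abstract describes, not the authors' implementation: a first-order Taylor criterion that scores a filter by the estimated change in loss if it were removed, and a C-SGD-style update in which each filter in a cluster slated for pruning follows a blend of its own gradient and the cluster-mean gradient, plus a centripetal pull toward the cluster centroid. All names and the blending rule (a plain interpolation with weight alpha) are illustrative assumptions; the paper's exact AGL rule may differ.

```python
import numpy as np

def taylor_importance(w, g):
    """First-order Taylor criterion: |sum(w * g)| estimates how much the
    loss would change if this filter were removed (w: flattened filter
    weights, g: its gradient from backpropagation)."""
    return np.abs(np.sum(w * g))

def agl_step(W, G, cluster, lr=0.01, eps=3e-4, alpha=0.5):
    """One illustrative update for a cluster of filters to be pruned.

    W, G    : (num_filters, filter_size) weights and matching gradients
    cluster : indices of filters that should converge to a single point
    alpha   : blend between a filter's own gradient (preserving its
              original direction) and the shared cluster-mean gradient
    eps     : strength of the centripetal pull toward the centroid
    """
    mean_g = G[cluster].mean(axis=0)  # shared descent direction
    mean_w = W[cluster].mean(axis=0)  # cluster centroid
    for i in cluster:
        blended = alpha * G[i] + (1.0 - alpha) * mean_g  # adaptive gradient
        centripetal = eps * (W[i] - mean_w)              # pull toward centroid
        W[i] -= lr * (blended + centripetal)
    return W

# Toy usage: score filters, cluster the least important ones, take a step.
W = np.random.randn(64, 3 * 3 * 16)  # 64 flattened 3x3x16 conv filters
G = np.random.randn(*W.shape)        # stand-in gradients
scores = [taylor_importance(W[i], G[i]) for i in range(len(W))]
cluster = np.argsort(scores)[:8]     # 8 filters with the lowest Taylor scores
W = agl_step(W, G, cluster)
```

Annealing alpha toward zero over training would let the cluster collapse exactly to one point, after which all but one filter can be removed without changing the network's output, matching the "perfect convergence" property the abstract claims for AGL.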
Pages: 5279-5290
Page count: 12
Related Papers
50 records in total
• [1] He, Yang; Kang, Guoliang; Dong, Xuanyi; Fu, Yanwei; Yang, Yi. Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018: 2234-2240.
• [2] Singh, Pravendra; Verma, Vinay Kumar; Rai, Piyush; Namboodiri, Vinay P. Acceleration of Deep Convolutional Neural Networks Using Adaptive Filter Pruning. IEEE Journal of Selected Topics in Signal Processing, 2020, 14(4): 838-847.
• [3] Reinhold, Caique; Roisenberg, Mauro. Filter Pruning for Efficient Transfer Learning in Deep Convolutional Neural Networks. Artificial Intelligence and Soft Computing, Pt I, 2019, 11508: 191-202.
• [4] He, Yang; Ding, Yuhang; Liu, Ping; Zhu, Linchao; Zhang, Hanwang; Yang, Yi. Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 2006-2015.
• [5] You, Zhonghui; Yan, Kun; Ye, Jinmian; Ma, Meng; Wang, Ping. Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32.
• [6] Li, Qinghua; Li, Cuiping; Chen, Hong. Incremental Filter Pruning via Random Walk for Accelerating Deep Convolutional Neural Networks. Proceedings of the 13th International Conference on Web Search and Data Mining (WSDM '20), 2020: 358-366.
• [7] Rong, Jintao; Yu, Xiyi; Zhang, Mingyang; Ou, Linlin. Soft Taylor Pruning for Accelerating Deep Convolutional Neural Networks. IECON 2020: The 46th Annual Conference of the IEEE Industrial Electronics Society, 2020: 5343-5349.
• [8] Liu, Congcong; Wu, Huaming. Channel Pruning Based on Mean Gradient for Accelerating Convolutional Neural Networks. Signal Processing, 2019, 156: 84-91.
• [9] Li, Qinghua; Li, Cuiping; Chen, Hong. Filter Pruning via Probabilistic Model-based Optimization for Accelerating Deep Convolutional Neural Networks. WSDM '21: Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 2021: 653-661.
• [10] He, Yang; Dong, Xuanyi; Kang, Guoliang; Fu, Yanwei; Yan, Chenggang; Yang, Yi. Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks. IEEE Transactions on Cybernetics, 2020, 50(8): 3594-3604.