Studying the plasticity in deep convolutional neural networks using random pruning

Cited by: 0
Authors
Deepak Mittal
Shweta Bhardwaj
Mitesh M. Khapra
Balaraman Ravindran
Affiliations
[1] Indian Institute of Technology Madras, Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI (RBCDSAI)
Source
Machine Vision and Applications, 2019, 30(2): 203-216
Keywords
Deep learning; Filter pruning; Model compression; Convolutional neural networks
DOI
Not available
Abstract
Recently, there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, $l_1$-norm, average percentage of zeros, etc.) and retain only the top-ranked filters. Once the low-scoring filters are pruned away, the remainder of the network is fine-tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen, but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counterintuitive results wherein by randomly pruning 25–50% filters from deep CNNs we are able to obtain the same performance as obtained by using state-of-the-art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real-world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class-specific pruning and show that even here a random pruning strategy gives close to state-of-the-art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection and image segmentation. We show that using a simple random pruning strategy, we can achieve significant speedup in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster-RCNN model. Similarly, we show that the performance of a pruned segmentation network is actually very similar to that of the original unpruned SegNet.
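To make the ranking step described in the abstract concrete, the following is a minimal sketch (not the authors' code) of criterion-based versus random filter pruning for a single convolutional layer, written in PyTorch. The prune_filters helper, the masking-based approach (zeroing weights rather than physically removing filters and shrinking the next layer), the 50% keep ratio, and the layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

def prune_filters(conv: nn.Conv2d, keep_ratio: float = 0.5, criterion: str = "l1"):
    """Zero out the lowest-ranked output filters of `conv` in place.

    criterion="l1"     -> rank filters by the l1-norm of their weights
    criterion="random" -> rank filters by a random score (random pruning)
    """
    num_filters = conv.out_channels
    num_keep = int(num_filters * keep_ratio)

    if criterion == "l1":
        # l1-norm of each filter's weights, one score per output channel
        scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    else:
        scores = torch.rand(num_filters)

    # Indices of the top-ranked filters to keep
    keep_idx = torch.topk(scores, num_keep).indices

    # Masking keeps the sketch short; a real implementation would rebuild the
    # layer (and the next layer's input channels) with fewer filters and then
    # fine-tune the remaining network, as the paper describes.
    mask = torch.zeros(num_filters)
    mask[keep_idx] = 1.0
    with torch.no_grad():
        conv.weight *= mask.view(-1, 1, 1, 1)
        if conv.bias is not None:
            conv.bias *= mask

if __name__ == "__main__":
    layer = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)
    prune_filters(layer, keep_ratio=0.5, criterion="random")  # or criterion="l1"
    x = torch.randn(1, 64, 32, 32)
    print(layer(x).shape)  # torch.Size([1, 128, 32, 32]); half the output maps are now zero

Swapping criterion="l1" for criterion="random" is the experimental comparison the paper makes: after fine-tuning, both choices are reported to recover comparable accuracy.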
Pages: 203-216
Number of pages: 13
Related papers
50 records in total
  • [1] Studying the plasticity in deep convolutional neural networks using random pruning
    Mittal, Deepak
    Bhardwaj, Shweta
    Khapra, Mitesh M.
    Ravindran, Balaraman
    [J]. MACHINE VISION AND APPLICATIONS, 2019, 30 (02) : 203 - 216
  • [2] Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks
    Mittal, Deepak
    Bhardwaj, Shweta
    Khapra, Mitesh M.
    Ravindran, Balaraman
    [J]. 2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, : 848 - 857
  • [3] Structured Pruning of Deep Convolutional Neural Networks
    Anwar, Sajid
    Hwang, Kyuyeon
    Sung, Wonyong
    [J]. ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2017, 13 (03)
  • [4] Activation Pruning of Deep Convolutional Neural Networks
    Ardakani, Arash
    Condo, Carlo
    Gross, Warren J.
    [J]. 2017 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2017), 2017, : 1325 - 1329
  • [5] Acceleration of Deep Convolutional Neural Networks Using Adaptive Filter Pruning
    Singh, Pravendra
    Verma, Vinay Kumar
    Rai, Piyush
    Namboodiri, Vinay P.
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2020, 14 (04) : 838 - 847
  • [6] Compression of Deep Convolutional Neural Networks Using Effective Channel Pruning
    Guo, Qingbei
    Wu, Xiao-Jun
    Zhao, Xiuyang
    [J]. IMAGE AND GRAPHICS, ICIG 2019, PT I, 2019, 11901 : 760 - 772
  • [7] Incremental Filter Pruning via Random Walk for Accelerating Deep Convolutional Neural Networks
    Li, Qinghua
    Li, Cuiping
    Chen, Hong
    [J]. PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020, : 358 - 366
  • [8] Structured Pruning for Deep Convolutional Neural Networks: A Survey
    He, Yang
    Xiao, Lingao
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 2900 - 2919
  • [9] Pruning Deep Convolutional Neural Networks Architectures with Evolution Strategy
    Fernandes, Francisco E., Jr.
    Yen, Gary G.
    [J]. INFORMATION SCIENCES, 2021, 552 : 29 - 47
  • [10] Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks
    He, Yang
    Dong, Xuanyi
    Kang, Guoliang
    Fu, Yanwei
    Yan, Chenggang
    Yang, Yi
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (08) : 3594 - 3604