Overview of Deep Convolutional Neural Network Pruning

Cited: 1
|
Authors
Li, Guang [1 ]
Liu, Fang [1 ]
Xia, Yuping [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Elect Sci & Technol, Natl Key Lab Sci & Technol Automat Target Recogni, Changsha 410073, Peoples R China
Keywords
neural network; model compression; network pruning;
DOI
10.1117/12.2580086
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the rapid development of deep convolutional neural networks has made deep learning model inference highly demanding of computing resources. Because of their limited resources, most current edge devices cannot support deep learning applications with low latency, low power consumption, and high accuracy. Model compression and acceleration of deep networks are therefore an effective solution, and network pruning, which simplifies a model by removing redundant parameters used in the inference stage, has been a research hotspot in this field in recent years. This paper divides the work into six aspects for detailed analysis, surveys the latest progress in deep neural network pruning from the perspectives of pruning granularity and weight-importance criteria, and finally points out the problems in current research and analyzes future research directions in the field of pruning.
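To make the surveyed idea concrete, the following is a minimal sketch of the most common weight-importance criterion discussed in the pruning literature: unstructured magnitude pruning, which zeroes out the fraction of weights with the smallest absolute values. The function name `magnitude_prune` and the NumPy-based setup are illustrative assumptions, not the method of any specific paper in the survey.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    This is unstructured (fine-grained) pruning: individual weights are
    removed regardless of their position in a filter or channel.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold;
    # weights at or below it are set to zero (ties may prune slightly more).
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 50% of a tiny 2x2 weight matrix.
w = np.array([[0.5, -0.05],
              [0.01, -0.9]])
pruned = magnitude_prune(w, 0.5)  # keeps only 0.5 and -0.9
```

Structured (coarse-grained) variants apply the same idea at the level of whole filters or channels, which yields speedups on standard hardware without sparse-computation support.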
Pages: 16