Model Compression for Deep Neural Networks: A Survey

Cited by: 38
Authors
Li, Zhuo [1 ]
Li, Hengyi [1 ]
Meng, Lin [2 ]
Affiliations
[1] Ritsumeikan Univ, Grad Sch Sci & Engn, 1-1-1 Noji Higashi, Kusatsu 5258577, Japan
[2] Ritsumeikan Univ, Coll Sci & Engn, 1-1-1 Noji Higashi, Kusatsu 5258577, Japan
Keywords
deep neural networks; model compression; model pruning; parameter quantization; low-rank decomposition; knowledge distillation; lightweight model design
DOI
10.3390/computers12030060
CLC Classification Number
TP39 [Applications of computers];
Discipline Classification Codes
081203; 0835
Abstract
Currently, with the rapid development of deep learning, deep neural networks (DNNs) have been widely applied to various computer vision tasks. However, in the pursuit of performance, advanced DNN models have become increasingly complex, leading to large memory footprints and high computational demands; as a result, such models are difficult to run in real time. To address these issues, model compression has become a focus of research, and compression techniques play an important role in deploying models on edge devices. This survey analyzes various model compression methods to help researchers reduce device storage requirements, accelerate model inference, lower model complexity and training costs, and ease model deployment. It summarizes state-of-the-art techniques for model compression, including model pruning, parameter quantization, low-rank decomposition, knowledge distillation, and lightweight model design, and discusses research challenges and directions for future work.
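As a minimal illustration of two of the techniques the abstract lists, the sketch below shows magnitude-based unstructured pruning and uniform symmetric quantization in plain NumPy. The function names and the sparsity/bit-width parameters are illustrative assumptions, not drawn from the surveyed paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured pruning sketch: zero out the smallest-magnitude weights.

    `sparsity` is the fraction of weights to remove (e.g. 0.5 removes half).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def uniform_quantize(weights, num_bits=8):
    """Uniform symmetric quantization sketch: map floats to signed
    `num_bits` integers, then dequantize back to an approximation."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax  # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)
    return q * scale                        # dequantized approximation
```

In practice, surveyed methods refine these basics, e.g. structured pruning of whole channels, per-channel quantization scales, and fine-tuning to recover accuracy after compression.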
Pages: 22
Related Papers
50 records in total
  • [1] A survey of model compression for deep neural networks
    Li J.-Y.
    Zhao Y.-K.
    Xue Z.-E.
    Cai Z.
    Li Q.
    Gongcheng Kexue Xuebao/Chinese Journal of Engineering, 2019, 41 (10): 1229-1239
  • [2] Deep neural networks compression: A comparative survey and choice recommendations
    Marino, Giosue Cataldo
    Petrini, Alessandro
    Malchiodi, Dario
    Frasca, Marco
    NEUROCOMPUTING, 2023, 520: 152-170
  • [3] Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey
    Deng, Lei
    Li, Guoqi
    Han, Song
    Shi, Luping
    Xie, Yuan
    PROCEEDINGS OF THE IEEE, 2020, 108 (04): 485-532
  • [4] Discrete Model Compression with Resource Constraint for Deep Neural Networks
    Gao, Shangqian
    Huang, Feihu
    Pei, Jian
    Huang, Heng
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020: 1896-1905
  • [5] Image compression with neural networks - A survey
    Jiang, J
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 1999, 14 (09): 737-760
  • [6] Operator compression with deep neural networks
    Fabian Kröpfl
    Roland Maier
    Daniel Peterseim
    Advances in Continuous and Discrete Models, 2022
  • [7] Operator compression with deep neural networks
    Kroepfl, Fabian
    Maier, Roland
    Peterseim, Daniel
    ADVANCES IN CONTINUOUS AND DISCRETE MODELS, 2022, 2022 (01)
  • [8] Lossless Compression of Deep Neural Networks
    Serra, Thiago
    Kumar, Abhinav
    Ramalingam, Srikumar
    INTEGRATION OF CONSTRAINT PROGRAMMING, ARTIFICIAL INTELLIGENCE, AND OPERATIONS RESEARCH, CPAIOR 2020, 2020, 12296: 417-430
  • [9] Compression of Deep Neural Networks on the Fly
    Soulie, Guillaume
    Gripon, Vincent
    Robert, Maelys
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2016, PT II, 2016, 9887: 153-160
  • [10] Evolutionary Multi-Objective Model Compression for Deep Neural Networks
    Wang, Zhehui
    Luo, Tao
    Li, Miqing
    Zhou, Joey Tianyi
    Goh, Rick Siow Mong
    Zhen, Liangli
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2021, 16 (03): 10-21