Model Compression for Deep Neural Networks: A Survey

Cited by: 38
Authors
Li, Zhuo [1 ]
Li, Hengyi [1 ]
Meng, Lin [2 ]
Affiliations
[1] Ritsumeikan Univ, Grad Sch Sci & Engn, 1-1-1 Noji Higashi, Kusatsu 5258577, Japan
[2] Ritsumeikan Univ, Coll Sci & Engn, 1-1-1 Noji Higashi, Kusatsu 5258577, Japan
Keywords
deep neural networks; model compression; model pruning; parameter quantization; low-rank decomposition; knowledge distillation; lightweight model design
DOI
10.3390/computers12030060
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
With the rapid development of deep learning, deep neural networks (DNNs) have been widely applied to a variety of computer vision tasks. However, in the pursuit of performance, advanced DNN models have grown increasingly complex, leading to large memory footprints and high computational demands that make real-time deployment difficult. To address these issues, model compression has become a focus of research; compression techniques also play an important role in deploying models on edge devices. This survey analyzes various model compression methods to help researchers reduce device storage requirements, speed up model inference, lower model complexity and training costs, and ease model deployment. It summarizes state-of-the-art techniques for model compression, including model pruning, parameter quantization, low-rank decomposition, knowledge distillation, and lightweight model design, and discusses open research challenges and directions for future work.
Pages: 22