Dimensionality reduced training by pruning and freezing parts of a deep neural network: a survey

Cited: 4
Authors
Wimmer, Paul [1 ,2 ]
Mehnert, Jens [1 ]
Condurache, Alexandru Paul [1 ,2 ]
Affiliations
[1] Robert Bosch GmbH, Automated Driving Res, Burgenlandstr 44, D-70469 Stuttgart, Germany
[2] Univ Lubeck, Inst Signal Proc, Ratzeburger Allee 160, D-23562 Lubeck, Germany
Keywords
Pruning; Freezing; Lottery ticket hypothesis; Dynamic sparse training; Pruning at initialization; Extreme learning machine
DOI
10.1007/s10462-023-10489-1
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
State-of-the-art deep learning models have parameter counts that reach into the billions. Training, storing and transferring such models is energy and time consuming, and thus costly. A large part of these costs is caused by training the network. Model compression lowers storage and transfer costs, and can further make training more efficient by decreasing the number of computations in the forward and/or backward pass. Thus, compressing networks at training time while maintaining high performance is an important research topic. This work is a survey of methods that reduce the number of trained weights in deep learning models throughout training. Most of the introduced methods set network parameters to zero, which is called pruning. The presented pruning approaches are categorized into pruning at initialization, lottery tickets and dynamic sparse training. Moreover, we discuss methods that freeze parts of a network at its random initialization. By freezing weights, the number of trainable parameters shrinks, which reduces gradient computations and the dimensionality of the model's optimization space. In this survey we first propose dimensionality reduced training as an underlying mathematical model that covers pruning and freezing during training. Afterwards, we present and discuss different dimensionality reduced training methods, with a strong focus on unstructured pruning and freezing methods.
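To make the pruning/freezing distinction in the abstract concrete, here is a minimal NumPy sketch. It is illustrative only (the toy layer, the 50% sparsity level and the masking scheme are assumptions, not taken from the survey): unstructured magnitude pruning sets the smallest-magnitude weights to zero, while freezing keeps a subset of weights at their random initialization by discarding their gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix of a single dense layer.
W = rng.normal(size=(4, 4))

# --- Unstructured pruning: zero out the smallest-magnitude weights. ---
sparsity = 0.5  # fraction of weights to remove (illustrative choice)
k = int(W.size * sparsity)
threshold = np.sort(np.abs(W), axis=None)[k - 1]
mask = np.abs(W) > threshold          # True = keep, False = pruned
W_pruned = W * mask                   # pruned weights are exactly zero

# --- Freezing: mark some weights as untrainable at initialization. ---
# A frozen weight keeps its (random) initial value; its gradient is dropped.
frozen = rng.random(W.shape) < 0.5    # freeze roughly half of the weights
grad = rng.normal(size=W.shape)       # stand-in for a backprop gradient
grad_applied = np.where(frozen, 0.0, grad)  # no update where frozen
W_updated = W_pruned - 0.1 * grad_applied   # one SGD-style step

# In both cases fewer parameters are effectively trained, shrinking the
# dimensionality of the optimization space.
print("pruned fraction:", 1 - mask.mean())
```

In both branches the effect on training cost is the same in spirit: pruning removes weights from the forward and backward pass, while freezing still uses the weights in the forward pass but skips their gradient updates.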
Pages: 14257-14295 (39 pages)
Related papers (50 total)
  • [31] Classification of Car Parts Using Deep Neural Network
    Khanal, Salik Ram
    Amorim, Eurico Vasco
    Filipe, Vitor
    CONTROLO 2020, 2021, 695 : 582 - 591
  • [32] A Self-organizing Neural Network Using Fast Training and Pruning
    Qiao Jun-fei
    Li Miao
    Han Hong-gui
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1- 6, 2009, : 1332 - 1337
  • [33] H∞ filtering in neural network training and pruning with application to system identification
    Tang, He-Sheng
    Xue, Songtao
    Sato, Tadanobu
    JOURNAL OF COMPUTING IN CIVIL ENGINEERING, 2007, 21 (01) : 47 - 58
  • [34] Direct Zero-Norm Minimization for Neural Network Pruning and Training
    Adam, S. P.
    Magoulas, George D.
    Vrahatis, M. N.
    ENGINEERING APPLICATIONS OF NEURAL NETWORKS, 2012, 311 : 295 - +
  • [35] RazorNet: Adversarial Training and Noise Training on a Deep Neural Network Fooled by a Shallow Neural Network
    Taheri, Shayan
    Salem, Milad
    Yuan, Jiann-Shiun
    BIG DATA AND COGNITIVE COMPUTING, 2019, 3 (03) : 1 - 17
  • [36] A framework for deep neural network multiuser authorization based on channel pruning
    Wang, Linna
    Song, Yunfei
    Zhu, Yujia
    Xia, Daoxun
    Han, Guoquan
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2023, 35 (21)
  • [37] RESHAPING DEEP NEURAL NETWORK FOR FAST DECODING BY NODE-PRUNING
    He, Tianxing
    Fan, Yuchen
    Qian, Yanmin
    Tan, Tian
    Yu, Kai
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [38] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
    Luo, Jian-Hao
    Wu, Jianxin
    Lin, Weiyao
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 5068 - 5076
  • [39] Deep Neural Network Channel Pruning Compression Method for Filter Elasticity
    Li, Ruiquan
    Zhu, Lu
    Liu, Yuanyuan
    Computer Engineering and Applications, 2024, 60 (06) : 163 - 171
  • [40] Group Pruning with Group Sparse Regularization for Deep Neural Network Compression
    Wu, Chenglu
    Pang, Wei
    Liu, Hao
    Lu, Shengli
    2019 IEEE 4TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING (ICSIP 2019), 2019, : 325 - 329