Dimensionality reduced training by pruning and freezing parts of a deep neural network: a survey

Cited by: 4
Authors
Wimmer, Paul [1 ,2 ]
Mehnert, Jens [1 ]
Condurache, Alexandru Paul [1 ,2 ]
Affiliations
[1] Robert Bosch GmbH, Automated Driving Res, Burgenlandstr 44, D-70469 Stuttgart, Germany
[2] Univ Lubeck, Inst Signal Proc, Ratzeburger Allee 160, D-23562 Lubeck, Germany
Keywords
Pruning; Freezing; Lottery ticket hypothesis; Dynamic sparse training; Pruning at initialization; Extreme learning machine
DOI
10.1007/s10462-023-10489-1
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
State-of-the-art deep learning models have parameter counts that reach into the billions. Training, storing, and transferring such models is energy- and time-consuming, and thus costly. A large part of these costs is incurred by training the network. Model compression lowers storage and transfer costs, and can further make training more efficient by decreasing the number of computations in the forward and/or backward pass. Compressing networks already at training time while maintaining high performance is therefore an important research topic. This work surveys methods that reduce the number of trained weights in deep learning models throughout training. Most of the presented methods set network parameters to zero, which is called pruning. The presented pruning approaches are categorized into pruning at initialization, lottery tickets, and dynamic sparse training. Moreover, we discuss methods that freeze parts of a network at its random initialization. Freezing weights shrinks the number of trainable parameters, which reduces gradient computations and the dimensionality of the model's optimization space. In this survey, we first propose dimensionality reduced training as an underlying mathematical model that covers pruning and freezing during training. Afterwards, we present and discuss different dimensionality reduced training methods, with a strong focus on unstructured pruning and freezing methods.
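As a minimal illustration of the two mechanisms the abstract distinguishes, the PyTorch sketch below applies a fixed binary mask to a weight tensor (unstructured pruning) and disables gradients for a parameter left at its random initialization (freezing). It is not taken from the survey; the layer shape, the roughly 80% sparsity level, and the optimizer settings are arbitrary assumptions made only for this example.

# Hypothetical sketch, not from the survey: unstructured pruning via a fixed
# binary mask plus freezing via disabled gradients, using standard PyTorch APIs.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(64, 32)

# Pruning at initialization: zero out roughly 80% of the weights and keep the
# sparsity pattern fixed by re-applying the mask after every optimizer step.
mask = (torch.rand_like(layer.weight) > 0.8).float()
with torch.no_grad():
    layer.weight.mul_(mask)

# Freezing: the bias stays at its random initialization and receives no
# gradients, shrinking the dimensionality of the optimization space.
layer.bias.requires_grad_(False)

optimizer = torch.optim.SGD(
    [p for p in layer.parameters() if p.requires_grad], lr=0.1
)

x, y = torch.randn(8, 64), torch.randn(8, 32)
loss = nn.functional.mse_loss(layer(x), y)
loss.backward()
optimizer.step()

with torch.no_grad():
    layer.weight.mul_(mask)  # keep pruned weights at exactly zero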
Pages: 14257-14295
Number of pages: 39