Dimensionality reduced training by pruning and freezing parts of a deep neural network: a survey

Cited by: 4
Authors
Wimmer, Paul [1 ,2 ]
Mehnert, Jens [1 ]
Condurache, Alexandru Paul [1 ,2 ]
Affiliations
[1] Robert Bosch GmbH, Automated Driving Res, Burgenlandstr 44, D-70469 Stuttgart, Germany
[2] Univ Lubeck, Inst Signal Proc, Ratzeburger Allee 160, D-23562 Lubeck, Germany
Keywords
Pruning; Freezing; Lottery ticket hypothesis; Dynamic sparse training; Pruning at initialization; Extreme learning machine
DOI
10.1007/s10462-023-10489-1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
State-of-the-art deep learning models have parameter counts that reach into the billions. Training, storing and transferring such models is energy and time consuming, and thus costly. A large part of these costs is caused by training the network. Model compression lowers storage and transfer costs, and can further make training more efficient by decreasing the number of computations in the forward and/or backward pass. Compressing networks already at training time while maintaining high performance is therefore an important research topic. This work surveys methods that reduce the number of trained weights in deep learning models throughout training. Most of the presented methods set network parameters to zero, which is called pruning. The pruning approaches are categorized into pruning at initialization, lottery tickets and dynamic sparse training. Moreover, we discuss methods that freeze parts of a network at its random initialization. By freezing weights, the number of trainable parameters shrinks, which reduces gradient computations and the dimensionality of the model's optimization space. In this survey, we first propose dimensionality reduced training as an underlying mathematical model that covers pruning and freezing during training. Afterwards, we present and discuss different dimensionality reduced training methods, with a strong focus on unstructured pruning and freezing methods.
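The abstract's unifying view, that pruning and freezing both restrict optimization to a low-dimensional subspace of the full parameter space, can be made concrete. As a rough illustration (the notation is ours and may differ from the survey's own formalization), the trained parameters can be written as

\theta_t = \theta_0 + U w_t, \qquad \theta_t \in \mathbb{R}^N, \; w_t \in \mathbb{R}^n, \; n \ll N,

where U \in \{0,1\}^{N \times n} is a fixed column-selection matrix and only w_t receives gradient updates. Pruning corresponds to setting the unselected coordinates of \theta_0 to zero; freezing keeps them at their random initial values. The following minimal PyTorch sketch (class and argument names are ours, not from the survey) emulates both variants with a fixed binary mask:

import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose trainable weights are selected by a fixed binary mask.

    mode="prune":  untrained weights are held at zero.
    mode="freeze": untrained weights are held at their random initialization.
    """

    def __init__(self, in_features, out_features, density=0.1, mode="prune"):
        super().__init__()
        init = torch.randn(out_features, in_features) / in_features ** 0.5
        mask = (torch.rand_like(init) < density).float()  # 1 = trainable
        self.register_buffer("mask", mask)
        # Fixed part of the weight: zero when pruning, the random
        # initialization on the untrained coordinates when freezing.
        fixed = torch.zeros_like(init) if mode == "prune" else init * (1 - mask)
        self.register_buffer("fixed", fixed)
        self.delta = nn.Parameter(init * mask)  # only these entries get trained
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Multiplying delta by the mask zeroes the gradients of all untrained
        # coordinates, so optimization runs in a mask.sum()-dimensional
        # subspace of the full parameter space.
        weight = self.fixed + self.delta * self.mask
        return nn.functional.linear(x, weight, self.bias)

# Usage: 5% of the weights are trained, the rest stay frozen at their init.
layer = MaskedLinear(784, 128, density=0.05, mode="freeze")
loss = layer(torch.randn(32, 784)).sum()
loss.backward()
assert torch.all(layer.delta.grad[layer.mask == 0] == 0)

Note that this dense-mask emulation clarifies the mathematics but does not by itself save compute or memory; the methods covered in the survey realize the savings with sparse storage and sparse forward/backward kernels.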
Pages: 14257-14295
Number of pages: 39