Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training

Cited by: 1
Authors
Wiedemann, Simon [1]
Mehari, Temesgen [1]
Kepp, Kevin [1]
Samek, Wojciech [1]
Affiliations
[1] Fraunhofer Heinrich Hertz Inst, Dept Video Coding & Analyt, Berlin, Germany
DOI
10.1109/CVPRW50498.2020.00368
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Networks are successful but highly computationally expensive learning systems. One of the main sources of time and energy consumption is the well-known backpropagation (backprop) algorithm, which roughly accounts for 2/3 of the computational cost of training. In this work we propose a method for reducing the computational complexity of backprop, which we name dithered backprop. It consists of applying a stochastic quantization scheme to intermediate results of the method. The particular quantization scheme, called non-subtractive dither (NSD), induces sparsity which can be exploited by computing efficient sparse matrix multiplications. Experiments on popular image classification tasks show that it induces 92% sparsity on average across a wide set of models at no or negligible accuracy drop in comparison to state-of-the-art approaches, thus significantly reducing the computational complexity of the backward pass. Moreover, we show that our method is fully compatible with state-of-the-art training methods that reduce the bit-precision of training down to 8 bits, and can therefore further reduce the computational requirements. Finally, we discuss and show potential benefits of applying dithered backprop in a distributed training setting, where communication as well as compute efficiency may increase simultaneously with the number of participating nodes.
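
A minimal sketch of the non-subtractive dithered quantization idea described above, assuming a plain rounding quantizer with step size `delta` (an illustrative choice, not the authors' released implementation):

    # Non-subtractive dither (NSD) sketch: add uniform noise, then round.
    # `delta` (the quantization step) is an assumed parameter name.
    import numpy as np

    def dithered_quantize(x, delta, rng=None):
        """Quantize x to multiples of `delta` after adding uniform dither.

        The dither is not subtracted afterwards (non-subtractive), the result
        is unbiased in expectation, and entries much smaller than `delta` are
        rounded to exactly zero with high probability -- the sparsity that
        dithered backprop exploits via sparse matrix multiplications.
        """
        rng = rng or np.random.default_rng()
        noise = rng.uniform(-delta / 2.0, delta / 2.0, size=x.shape)
        return delta * np.round((x + noise) / delta)

    # Toy usage on a gradient-like tensor with many small entries.
    g = 0.05 * np.random.default_rng(0).standard_normal((4, 6))
    q = dithered_quantize(g, delta=0.2)
    print("fraction of exact zeros:", float(np.mean(q == 0.0)))

In this sketch, values with magnitude well below delta/2 are mapped to exact zeros, while larger values survive quantization, so the quantized backward-pass tensors become sparse without a systematic bias.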
Pages: 3096-3104
Number of pages: 9