Training Deep Neural Networks with Constrained Learning Parameters

Cited by: 0
Authors
Date, Prasanna [1 ]
Carothers, Christopher D. [1 ]
Mitchell, John E. [2 ]
Hendler, James A. [1 ]
Magdon-Ismail, Malik [1 ]
Affiliations
[1] Rensselaer Polytech Inst, Dept Comp Sci, Troy, NY 12180 USA
[2] Rensselaer Polytech Inst, Dept Math Sci, Troy, NY 12180 USA
Keywords
Deep Neural Networks; Training Algorithm; Deep Learning; Machine Learning; Artificial Intelligence; Local Search; Optimization
DOI
10.1109/ICRC2020.2020.00018
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Today's deep learning models are primarily trained on CPUs and GPUs. Although these models tend to have low error, they consume high power and use large amounts of memory owing to double-precision floating-point learning parameters. Beyond Moore's law, a significant portion of deep learning tasks will run on edge computing systems, which will form an indispensable part of the overall computation fabric. Consequently, training deep learning models for such systems will have to be tailored and adapted to produce models with the following desirable characteristics: low error, low memory, and low power. We believe that deep neural networks (DNNs) whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems, would be instrumental for intelligent edge computing systems with these desirable characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate gradient descent-based approach for training deep learning models with finite discrete learning parameters. We then elaborate on the theoretical underpinnings of CoNNTrA and evaluate its computational complexity. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris, and ImageNet data sets and compare their performance to the same models trained using Backpropagation, using four performance metrics: (i) training error; (ii) validation error; (iii) memory usage; and (iv) training time. Our results indicate that CoNNTrA models use 32x less memory and have errors on par with the Backpropagation models.
Pages: 107-115
Page count: 9
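The abstract's central idea, searching coordinate by coordinate over learning parameters constrained to a finite discrete (here ternary) set, can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration assuming a single linear layer and a plain greedy coordinate search; it is not the authors' CoNNTrA implementation, whose actual procedure and complexity analysis are given in the paper.

    # Minimal sketch (not the authors' code): coordinate-wise search over
    # ternary values {-1, 0, +1}, in the spirit of the coordinate
    # gradient-descent approach described in the abstract.
    import numpy as np

    def loss(w, X, y):
        """Mean squared error of a single linear layer; stands in for a DNN loss."""
        return np.mean((X @ w - y) ** 2)

    def ternary_coordinate_descent(X, y, n_sweeps=10, values=(-1.0, 0.0, 1.0)):
        """Greedily pick the best ternary value for one coordinate at a time."""
        rng = np.random.default_rng(0)
        w = rng.choice(values, size=X.shape[1])      # random ternary start
        for _ in range(n_sweeps):
            for j in range(w.size):                  # one coordinate at a time
                best_v, best_l = w[j], loss(w, X, y)
                for v in values:                     # try each allowed value
                    w[j] = v
                    l = loss(w, X, y)
                    if l < best_l:
                        best_v, best_l = v, l
                w[j] = best_v                        # keep the best value found
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 8))
        w_true = rng.choice([-1.0, 0.0, 1.0], size=8)
        y = X @ w_true
        w_hat = ternary_coordinate_descent(X, y)
        print("recovered ternary weights:", w_hat)
        # Memory argument from the abstract: a 64-bit double per weight vs.
        # a 2-bit ternary code per weight gives the reported 64/2 = 32x saving.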