Training Deep Neural Networks with Constrained Learning Parameters

Cited by: 0
Authors
Date, Prasanna [1 ]
Carothers, Christopher D. [1 ]
Mitchell, John E. [2 ]
Hendler, James A. [1 ]
Magdon-Ismail, Malik [1 ]
Affiliations
[1] Rensselaer Polytech Inst, Dept Comp Sci, Troy, NY 12180 USA
[2] Rensselaer Polytech Inst, Dept Math Sci, Troy, NY 12180 USA
Keywords
Deep Neural Networks; Training Algorithm; Deep Learning; Machine Learning; Artificial Intelligence; Local Search; Optimization
DOI
10.1109/ICRC2020.2020.00018
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Today's deep learning models are trained primarily on CPUs and GPUs. Although these models tend to have low error, they consume high power and use large amounts of memory owing to their double-precision floating-point learning parameters. In the post-Moore's-law era, a significant portion of deep learning workloads will run on edge computing systems, which will form an indispensable part of the overall computation fabric. Consequently, training deep learning models for such systems will have to be adapted to produce models with three desirable characteristics: low error, low memory footprint, and low power consumption. We believe that deep neural networks (DNNs) whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems, would be instrumental for intelligent edge computing systems with these characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate-gradient-descent-based approach for training deep learning models with finite discrete learning parameters. We elaborate on the theoretical underpinnings of CoNNTrA and evaluate its computational complexity. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris, and ImageNet data sets and compare their performance to the same models trained using backpropagation, using four performance metrics: (i) training error, (ii) validation error, (iii) memory usage, and (iv) training time. Our results indicate that the CoNNTrA models use 32x less memory and achieve errors on par with the backpropagation models.
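As a rough illustration of training over a finite discrete parameter set, the sketch below performs a brute-force coordinate search over ternary weights {-1, 0, +1} for a linear softmax classifier. This is a minimal sketch only, not the authors' CoNNTrA implementation (which the abstract describes as coordinate-gradient-descent-based); the names conntra_like_train and cross_entropy and the toy data are assumptions introduced here for illustration.

# Hypothetical sketch (not the authors' CoNNTrA code): coordinate-wise search
# over ternary weights {-1, 0, +1} for a single-layer softmax classifier.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(W, X, y):
    # Mean cross-entropy loss of a linear softmax model with weight matrix W.
    p = softmax(X @ W)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def conntra_like_train(X, y, n_classes, sweeps=5, seed=0):
    # Each weight is restricted to {-1, 0, +1}; one coordinate at a time is
    # set to whichever ternary value gives the lowest training loss.
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.choice([-1.0, 0.0, 1.0], size=(n_features, n_classes))
    for _ in range(sweeps):
        for i in range(n_features):
            for j in range(n_classes):
                best_v, best_loss = W[i, j], cross_entropy(W, X, y)
                for v in (-1.0, 0.0, 1.0):
                    W[i, j] = v
                    loss = cross_entropy(W, X, y)
                    if loss < best_loss:
                        best_v, best_loss = v, loss
                W[i, j] = best_v
    return W

if __name__ == "__main__":
    # Toy usage on random data standing in for a small data set such as Iris.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(150, 4))
    y = rng.integers(0, 3, size=150)
    W = conntra_like_train(X, y, n_classes=3)
    print("trained ternary weights:\n", W)

The abstract's 32x memory figure is consistent with this setting: a ternary parameter can be encoded in about 2 bits, versus 64 bits for a double-precision floating-point parameter.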
Pages: 107 - 115
Page count: 9