A Compression-Driven Training Framework for Embedded Deep Neural Networks

Cited by: 1
Authors
Grimaldi, Matteo [1 ]
Pugliese, Federico [1 ]
Tenace, Valerio [1 ]
Calimera, Andrea [1 ]
Affiliations
[1] Politecnico di Torino, I-10129 Turin, Italy
Keywords
Deep Learning; Learning Algorithms; Compression
DOI
10.1145/3285017.3285021
CLC Classification Number
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Deep neural networks (DNNs) are brain-inspired machine learning methods designed to recognize key patterns in raw data. State-of-the-art DNNs, even the simplest ones, require a huge amount of memory to store and retrieve data during computation. This prevents practical mapping onto platforms with very limited resources, like those deployed in the end-nodes of the Internet-of-Things (IoT). The aim of this paper is to describe an efficient compression-driven training framework for embedded DNNs. The learning algorithm, a modified version of Stochastic Gradient Descent, restricts the original training space to a ternarized subspace {-sigma, 0, sigma}, with sigma a hyperparameter learned layer-wise. Tested on medium- and large-scale DNNs, both Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), the resulting sigma-ary DNNs enable efficient use of sparse-matrix representations, and hence high compression rates (up to 77x for CNNs and 95x for RNNs), at the cost of a limited accuracy loss.
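The abstract describes constraining each layer's weights to the ternary set {-sigma, 0, sigma}, which makes the matrix sparse enough for compact sparse-matrix storage. As a rough illustrative sketch only (the paper's modified SGD and the layer-wise learning of sigma are not detailed in the abstract), a hypothetical magnitude-threshold projection could look like this:

```python
import numpy as np

def ternarize(W: np.ndarray, sigma: float, delta: float) -> np.ndarray:
    """Project a weight matrix onto {-sigma, 0, sigma}.

    Hypothetical projection for illustration: weights whose absolute
    value falls below the threshold `delta` are pruned to zero, and the
    rest snap to +/- sigma. The paper instead learns sigma layer-wise
    inside a modified SGD, whose details are not given in the abstract.
    """
    Q = np.zeros_like(W)
    Q[W > delta] = sigma
    Q[W < -delta] = -sigma
    return Q

# Toy layer: most weights are small, so the ternarized matrix is sparse.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 64))
Q = ternarize(W, sigma=0.2, delta=0.15)

# A sparse representation only needs (index, sign) pairs plus a single
# sigma per layer -- this is where high compression rates come from.
sparsity = 1.0 - np.count_nonzero(Q) / Q.size
print(f"sparsity: {sparsity:.2%}")
```

Because every surviving weight carries the same magnitude, storing the layer reduces to one float (sigma) plus the positions and signs of the nonzeros, rather than one float per weight.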
Pages: 45-50 (6 pages)