Speeding up Convolutional Neural Network Training with Dynamic Precision Scaling and Flexible Multiplier-Accumulator

Cited by: 13
Authors
Na, Taesik [1 ]
Mukhopadhyay, Saibal [1 ]
Affiliations
[1] Georgia Inst Technol, 266 Ferst Dr, Atlanta, GA 30332 USA
Funding
National Science Foundation (USA);
Keywords
Convolutional neural network; Training;
DOI
10.1145/2934583.2934625
Chinese Library Classification (CLC) number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Training a convolutional neural network is a major bottleneck when developing a new network topology. This paper presents a dynamic precision scaling (DPS) algorithm and a flexible multiplier-accumulator (MAC) to speed up convolutional neural network training. The DPS algorithm uses dynamic fixed-point arithmetic and finds a sufficiently accurate numerical precision for the target network during training. The precision information from DPS is used to configure our proposed MAC, which performs fixed-point computation in variable-precision modes with differentiated computation times, so lower-precision computations complete faster and accelerate training. Simulation results show that our work achieves a 5.7x speed-up while consuming 31% of the baseline energy for a modified AlexNet on the Flickr image style recognition task.
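The abstract describes DPS as finding a good-enough dynamic fixed-point precision while training proceeds. As a rough illustration only, and not the authors' published algorithm, the Python sketch below shows one common way such a scheme can work: each tensor is quantized with a shared fractional bit length, and that length is adjusted from the observed overflow rate. All function names, the overflow threshold, and the 16-bit word width are illustrative assumptions.

```python
import numpy as np

def quantize_dynamic_fixed_point(x, frac_bits, word_bits=16):
    """Quantize a tensor to fixed point with a shared fractional length (hypothetical helper)."""
    scale = 2.0 ** frac_bits
    max_val = (2 ** (word_bits - 1) - 1) / scale   # largest representable value
    min_val = -(2 ** (word_bits - 1)) / scale      # most negative representable value
    q = np.round(x * scale) / scale                # snap to the fixed-point grid
    overflow_rate = np.mean((x > max_val) | (x < min_val))
    return np.clip(q, min_val, max_val), overflow_rate

def update_frac_bits(frac_bits, overflow_rate, word_bits=16, threshold=1e-4):
    """Adjust the shared fractional length based on the observed overflow rate (assumed policy)."""
    if overflow_rate > threshold:
        return max(0, frac_bits - 1)               # too many overflows: widen the integer part
    if overflow_rate < threshold / 2:
        return min(word_bits - 1, frac_bits + 1)   # headroom available: regain fractional precision
    return frac_bits

# Example: track a per-tensor fractional length across training iterations.
frac_bits = 12
for step in range(100):
    grads = np.random.randn(1024) * (1.0 + 0.05 * step)   # stand-in for real gradient values
    q_grads, ovf = quantize_dynamic_fixed_point(grads, frac_bits)
    frac_bits = update_frac_bits(frac_bits, ovf)
```

In the paper, the precision selected this way is what configures the variable-precision MAC, so iterations that can tolerate lower precision also run faster in hardware.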
Pages: 58-63
Page count: 6
Related papers
20 records in total
  • [1] On Fast Sample Preselection for Speeding up Convolutional Neural Network Training
    Rayar, Frederic
    Uchida, Seiichi
    [J]. STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, S+SSPR 2018, 2018, 11004 : 65 - 75
  • [2] Scaling up the training of Convolutional Neural Networks
    Snir, Marc
    [J]. 2019 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW), 2019, : 925 - 925
  • [3] SPEEDING-UP A CONVOLUTIONAL NEURAL NETWORK BY CONNECTING AN SVM NETWORK
    Pasquet, J.
    Chaumont, M.
    Subsol, G.
    Derras, M.
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 2286 - 2290
  • [4] Dynamic Precision Multiplier For Deep Neural Network Accelerators
    Ding, Chen
    Yuxiang, Huan
    Zheng, Lirong
    Zou, Zhuo
    [J]. 2020 IEEE 33RD INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (SOCC), 2020, : 180 - 184
  • [5] DistrEdge: Speeding up Convolutional Neural Network Inference on Distributed Edge Devices
    Hou, Xueyu
    Guan, Yongjie
    Han, Tao
    Zhang, Ning
    [J]. 2022 IEEE 36TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS 2022), 2022, : 1097 - 1107
  • [6] Speeding up the Topography Imaging of Atomic Force Microscopy by Convolutional Neural Network
    Zheng, Peng
    He, Hao
    Gao, Yun
    Tang, Peiwen
    Wang, Hailong
    Peng, Juan
    Wang, Lei
    Su, Chanmin
    Ding, Songyuan
    [J]. ANALYTICAL CHEMISTRY, 2022, 94 (12) : 5041 - 5047
  • [7] Convolutional Neural Network Training with Dynamic Epoch Ordering
    Plana Rius, Ferran
    Angulo Bahon, Cecilio
    Casas, Marc
    Mirats Tur, Josep Maria
    [J]. ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2019, 319 : 105 - 114
  • [8] Speeding-up Neural Network Training Using Sentence and Frame Selection
    Scanzio, Stefano
    Laface, Pietro
    Gemello, Roberto
    Mana, Franco
    [J]. INTERSPEECH 2007: 8TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION, VOLS 1-4, 2007, : 377 - +
  • [9] Power-Efficient Deep Neural Network Accelerator Minimizing Global Buffer Access without Data Transfer between Neighboring Multiplier-Accumulator Units
    Lee, Jeonghyeok
    Han, Sangwook
    Choi, Seungwon
    Choi, Jungwook
    [J]. ELECTRONICS, 2022, 11 (13)
  • [10] Speeding Up the Line-Scan Raman Imaging of Living Cells by Deep Convolutional Neural Network
    He, Hao
    Xu, Mengxi
    Zong, Cheng
    Zheng, Peng
    Luo, Lilan
    Wang, Lei
    Ren, Bin
    [J]. ANALYTICAL CHEMISTRY, 2019, 91 (11) : 7070 - 7077