Design of Power-Efficient Training Accelerator for Convolution Neural Networks

Citations: 6
Authors
Hong, JiUn [1 ]
Arslan, Saad [2 ]
Lee, TaeGeon [1 ]
Kim, HyungWon [1 ]
Affiliations
[1] Chungbuk Natl Univ, Dept Elect Engn, Chungdae Ro 1, Cheongju 28644, South Korea
[2] COMSATS Univ Islamabad, Dept Elect & Comp Engn, Pk Rd, Islamabad 45550, Pakistan
Funding
National Research Foundation, Singapore
Keywords
training accelerator; neural network; CNN; AI chip;
DOI
10.3390/electronics10070787
CLC Classification
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
To realize deep learning techniques, a type of deep neural network (DNN) called a convolutional neural network (CNN) is among the most widely used models for image recognition applications. However, there is a growing demand for lightweight, low-power neural network accelerators, not only for inference but also for training. In this paper, we propose a training accelerator that provides low power consumption and a compact chip size, targeting mobile and edge computing applications. It achieves real-time processing of both inference and training using concurrent floating-point data paths. The proposed accelerator can be externally controlled and employs resource sharing and an integrated convolution-pooling block to achieve low area and low energy consumption. We implemented the proposed training accelerator on an FPGA (Field-Programmable Gate Array) and evaluated its training performance on an MNIST CNN example against a PC with a GPU (Graphics Processing Unit). While both methods achieved a similar training accuracy of 95.1%, the proposed accelerator, when implemented in a silicon chip, reduced energy consumption by 480 times compared to its counterpart. Additionally, when implemented on an FPGA, it achieved an energy reduction of over 4.5 times compared to an existing FPGA training accelerator for the MNIST dataset. Therefore, the proposed accelerator is better suited for deployment in mobile/edge nodes than existing software and hardware accelerators.
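The abstract attributes part of the area and energy savings to an integrated convolution-pooling block, i.e., pooling is applied to convolution results as they are produced instead of buffering the full feature map. Below is a minimal NumPy sketch of that fusion idea; it illustrates only the concept, not the paper's hardware design, and all function names are our own.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution (cross-correlation, as in most CNNs)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def fused_conv_maxpool(image, kernel, pool=2):
    """Convolution fused with pool x pool max-pooling in a single pass.

    Each convolution result is folded into its pooling window's running
    maximum immediately, so the full convolution output map is never
    stored -- the storage-saving idea behind an integrated
    convolution-pooling block.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    ph, pw = oh // pool, ow // pool
    out = np.full((ph, pw), -np.inf, dtype=np.float32)
    for r in range(ph * pool):
        for c in range(pw * pool):
            v = np.sum(image[r:r + kh, c:c + kw] * kernel)
            out[r // pool, c // pool] = max(out[r // pool, c // pool], v)
    return out
```

For example, a 6x6 input with a 3x3 kernel yields a 4x4 convolution map, and the fused routine directly emits the 2x2 max-pooled result, matching `conv2d_valid` followed by separate 2x2 pooling.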
Pages: 19
Related Papers (50 records)
  • [1] A Power-efficient Accelerator for Convolutional Neural Networks
    Sun, Fan; Wang, Chao; Gong, Lei; Xu, Chongchong; Zhang, Yiwei; Lu, Yuntao; Li, Xi; Zhou, Xuehai
    2017 IEEE International Conference on Cluster Computing (CLUSTER), 2017: 631-632
  • [2] Power-Efficient Accelerator Design for Neural Networks Using Computation Reuse
    Yasoubi, Ali; Hojabr, Reza; Modarressi, Mehdi
    IEEE Computer Architecture Letters, 2017, 16(1): 72-75
  • [3] DeltaRNN: A Power-efficient Recurrent Neural Network Accelerator
    Gao, Chang; Neil, Daniel; Ceolini, Enea; Liu, Shih-Chii; Delbruck, Tobi
    Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA'18), 2018: 21-30
  • [4] Design of Power-Efficient Approximate Multipliers for Approximate Artificial Neural Networks
    Mrazek, Vojtech; Sarwar, Syed Shakib; Sekanina, Lukas; Vasicek, Zdenek; Roy, Kaushik
    2016 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2016
  • [5] Work-in-Progress: A Power-Efficient and High Performance FPGA Accelerator for Convolutional Neural Networks
    Gong, Lei; Wang, Chao; Li, Xi; Chen, Huaping; Zhou, Xuehai
    2017 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), 2017
  • [6] Input-Splitting of Large Neural Networks for Power-Efficient Accelerator with Resistive Crossbar Memory Array
    Kim, Yulhwa; Kim, Hyungjun; Ahn, Daehyun; Kim, Jae-Joon
    Proceedings of the International Symposium on Low Power Electronics and Design (ISLPED '18), 2018: 231-236
  • [7] Power-Efficient Double-Cyclic Low-Precision Training for Convolutional Neural Networks
    Kim, Sungrae; Kim, Hyun
    2022 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS 2022): Intelligent Technology in the Post-Pandemic Era, 2022: 344-347
  • [8] Power-Efficient Implementation of Ternary Neural Networks in Edge Devices
    Molina, Miguel; Mendez, Javier; Morales, Diego P.; Castillo, Encarnacion; Lopez Vallejo, Marisa; Pegalajar, Manuel
    IEEE Internet of Things Journal, 2022, 9(20): 20111-20121
  • [9] Optoelectronic Implementation of Compact and Power-efficient Recurrent Neural Networks
    Ichikawa, Taisei; Masuda, Yutaka; Ishihara, Tohru; Shinya, Akihiko; Notomi, Masaya
    2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2022), 2022: 390-393
  • [10] Power-Efficient Deep Neural Networks with Noisy Memristor Implementation
    Dupraz, Elsa; Varshney, Lav R.; Leduc-Primeau, Francois
    2021 IEEE Information Theory Workshop (ITW), 2021