A Low-Power Hardware Architecture for On-Line Supervised Learning in Multi-Layer Spiking Neural Networks

Cited by: 17
Authors:
Zheng, Nan [1]
Mazumder, Pinaki [1]
Affiliations:
[1] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
Funding:
U.S. National Science Foundation
Keywords:
Hardware neural network; supervised learning; machine learning; neuromorphic computing; spiking neural network; spike-timing-dependent plasticity
DOI:
10.1109/ISCAS.2018.8351516
Chinese Library Classification:
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes:
0808; 0809
Abstract:
In this paper, we propose an event-triggered hardware architecture for spiking neural networks with a weight-dependent spike-timing-dependent plasticity (STDP) learning algorithm. Several adaptations are made to the original learning algorithm in order to reduce the hardware complexity and improve the energy efficiency of the hardware. In addition, an algorithm-hardware co-design approach is employed to boost performance. By leveraging the sparsity of spike trains and local storage units in the network, both the memory requirement of the algorithm and the number of clock cycles needed per learning iteration are significantly reduced. The proposed hardware architecture is implemented in a 65-nm technology. A three-layer neural network with a 256-50-10 configuration is demonstrated. The designed chip can conduct inference on a down-sampled MNIST dataset with an energy consumption of 1.12 µJ/inference while achieving a recognition rate above 90%.
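The weight-dependent STDP rule the abstract refers to can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact hardware algorithm: the parameter names and values (A_PLUS, A_MINUS, TAU_MS, W_MAX) and the exponential timing kernel are assumptions chosen for clarity. In a weight-dependent rule, potentiation shrinks as the weight approaches its upper bound and depression shrinks as it approaches zero, which keeps weights softly bounded without an explicit clipping step.

```python
import math

# Hypothetical parameters (not taken from the paper): learning rates,
# STDP time constant, and upper weight bound.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_MS = 20.0
W_MAX = 1.0

def stdp_update(w, dt_ms):
    """Apply one weight-dependent STDP update.

    dt_ms = t_post - t_pre. Potentiation scales with (W_MAX - w);
    depression scales with w, so w stays in [0, W_MAX].
    """
    if dt_ms >= 0:  # pre spike before post spike -> potentiate
        return w + A_PLUS * (W_MAX - w) * math.exp(-dt_ms / TAU_MS)
    # post spike before pre spike -> depress
    return w - A_MINUS * w * math.exp(dt_ms / TAU_MS)

w = 0.5
w = stdp_update(w, 5.0)   # causal pairing strengthens the synapse
w = stdp_update(w, -5.0)  # anti-causal pairing weakens it
print(round(w, 4))
```

Because the update only fires when a spike pair occurs, such a rule is naturally event-triggered, which is what lets the architecture exploit spike-train sparsity to cut memory traffic and clock cycles.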
Pages: 5