An efficient automated parameter tuning framework for spiking neural networks

Cited by: 48
Authors
Carlson, Kristofor D. [1]
Nageswaran, Jayram Moorkanikara [2]
Dutt, Nikil [3]
Krichmar, Jeffrey L. [1,3]
Affiliations
[1] Univ Calif Irvine, Dept Cognit Sci, Irvine, CA 92697 USA
[2] Brain Corp, San Diego, CA USA
[3] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
Keywords
spiking neural networks; parameter tuning; evolutionary algorithms; GPU programming; self-organizing receptive fields; STDP; large-scale model; synaptic plasticity; cerebellum; simulation; evolution
DOI
10.3389/fnins.2014.00010
Chinese Library Classification
Q189 [Neuroscience]
Discipline code
071006
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well suited to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
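The tuning approach the abstract describes, an evolutionary algorithm searching over an SNN's open parameters and scoring each candidate network against target responses, can be sketched as a minimal generational EA. Everything below is an illustrative assumption rather than the authors' GPU-accelerated implementation: the function names, the elitism/crossover/mutation operator choices, and the toy quadratic fitness (in the paper, fitness would come from simulating each candidate SNN and comparing its tuning curves to V1-like targets, with the population evaluated in parallel on the GPU).

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=50, elite=2,
           mutation_rate=0.1, seed=0):
    """Minimal generational EA: elitism + uniform crossover + Gaussian mutation.

    fitness: callable mapping a parameter list to a score (higher is better).
    bounds:  list of (lo, hi) tuples, one per tunable parameter.
    """
    rng = random.Random(seed)

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    # Random initial population inside the parameter bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:elite]  # carry the best individuals over unchanged
        while len(next_pop) < pop_size:
            # Select two parents from the fitter half, then uniform crossover.
            a, b = rng.sample(scored[: pop_size // 2], 2)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            # Gaussian mutation, clamped back into bounds.
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    child[i] = clamp(child[i] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

if __name__ == "__main__":
    # Toy stand-in objective: pull two hypothetical "synaptic" parameters
    # toward a target vector (the paper scores simulated tuning curves instead).
    target = [0.3, 0.7]
    fit = lambda p: -sum((pi - ti) ** 2 for pi, ti in zip(p, target))
    print(evolve(fit, [(0.0, 1.0), (0.0, 1.0)]))
```

The expensive step in practice is the fitness call, which requires a full SNN simulation per candidate; that is what motivates the paper's GPU acceleration, since an entire generation's population can be simulated in parallel.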
Pages: 15