On the Intrinsic Structures of Spiking Neural Networks

Cited by: 0
Authors
Zhang, Shao-Qun [1 ,2 ]
Chen, Jia-Yi [1 ,2 ]
Wu, Jin-Hui [1 ,3 ]
Zhang, Gao [4 ]
Xiong, Huan [5 ]
Gu, Bin [5 ]
Zhou, Zhi-Hua [1 ,3 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ, Sch Intelligent Sci & Technol, Nanjing, Peoples R China
[3] Nanjing Univ, Sch Artificial Intelligence, Nanjing, Peoples R China
[4] Jiangsu Second Normal Univ, Sch Math Sci, Nanjing, Peoples R China
[5] Mohamed bin Zayed Univ Artificial Intelligence, Dept Machine Learning, Abu Dhabi, U Arab Emirates
Funding
U.S. National Science Foundation;
Keywords
Spiking Neural Network; Intrinsic Structures; Integration Operation; Self-connection Architecture; Firing-Reset Mechanism; Stochastic Excitation; Rademacher Complexity; COMPUTATIONAL POWER; NEURONS; NOISE;
DOI
Not available
CLC number
TP [automation and computer technology];
Subject classification code
0812;
Abstract
Recent years have seen a surge of interest in spiking neural networks (SNNs). The performance of SNNs hinges not only on searching for apposite architectures and connection weights, as in conventional artificial neural networks, but also on the meticulous configuration of their intrinsic structures. However, comprehensive studies examining the impact of intrinsic structures have been scarce; as a result, developers often find it challenging to apply a standardized SNN configuration across diverse datasets or tasks. This work delves into the intrinsic structures of SNNs. We first draw two key conclusions: (1) the membrane time hyper-parameter is intimately linked to the eigenvalues of the integration operation, dictating the functional topology of the spiking dynamics; (2) the hyper-parameters of the firing-reset mechanism govern the overall firing capacity of an SNN, mitigating the injection ratio or sampling density of the input data. These findings elucidate why the efficacy of SNNs hinges heavily on the configuration of their intrinsic structures, and they lead to the recommendation that enhancing the adaptability of these structures improves the overall performance and applicability of SNNs. Motivated by this recognition, we propose two feasible approaches to enhance SNN learning: developing self-connection architectures and stochastic spiking neurons to augment the adaptability of the integration operation and the firing-reset mechanism, respectively. We theoretically prove that (1) both methods promote the expressive property of universal approximation, (2) incorporating self-connection architectures fosters ample solutions and structural stability for SNNs approximating adaptive dynamical systems, and (3) stochastic spiking neurons maintain generalization bounds with an exponential reduction in Rademacher complexity. Empirical experiments conducted on various real-world datasets affirm the effectiveness of the proposed methods.
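The abstract's two intrinsic structures can be made concrete with a minimal leaky integrate-and-fire (LIF) sketch. This is an illustrative toy, not the paper's model: the membrane time constant `tau_m` sets the eigenvalue `exp(-dt/tau_m)` of the discrete-time leak (the integration operation), the threshold/reset pair implements the firing-reset mechanism, and the optional `stochastic` flag replaces the hard threshold with a Bernoulli firing probability in the spirit of stochastic spiking neurons. All names and parameter values here are assumptions for illustration.

```python
import numpy as np

def simulate_lif(inputs, tau_m=10.0, dt=1.0, v_th=1.0, v_reset=0.0,
                 stochastic=False, beta=5.0, rng=None):
    """Toy single LIF neuron (illustrative only, not the paper's model).

    tau_m          : membrane time hyper-parameter; the leak eigenvalue
                     exp(-dt / tau_m) governs the integration dynamics.
    v_th, v_reset  : firing-reset hyper-parameters capping firing capacity.
    stochastic     : if True, fire with probability sigmoid(beta*(v - v_th))
                     instead of a hard threshold (stochastic spiking neuron).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    decay = np.exp(-dt / tau_m)      # eigenvalue of the leak operation
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x            # leaky integration of input current
        if stochastic:
            p = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
            fired = rng.random() < p
        else:
            fired = v >= v_th        # hard threshold
        spikes.append(int(fired))
        if fired:
            v = v_reset              # reset membrane potential after a spike
    return spikes
```

A self-connection architecture, in this toy picture, would correspond to making the recurrent term (here the fixed scalar `decay`) a learnable connection, letting the effective integration eigenvalue adapt to the data rather than being fixed by a global `tau_m`.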
Pages: 74