On the Intrinsic Structures of Spiking Neural Networks

Citations: 0
Authors:
Zhang, Shao-Qun [1,2]
Chen, Jia-Yi [1,2]
Wu, Jin-Hui [1,3]
Zhang, Gao [4]
Xiong, Huan [5]
Gu, Bin [5]
Zhou, Zhi-Hua [1,3]
Affiliations:
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ, Sch Intelligent Sci & Technol, Nanjing, Peoples R China
[3] Nanjing Univ, Sch Artificial Intelligence, Nanjing, Peoples R China
[4] Jiangsu Second Normal Univ, Sch Math Sci, Nanjing, Peoples R China
[5] Mohamed bin Zayed Univ Artificial Intelligence, Dept Machine Learning, Abu Dhabi, U Arab Emirates
Funding:
US National Science Foundation;
Keywords:
Spiking Neural Network; Intrinsic Structures; Integration Operation; Self-connection Architecture; Firing-Reset Mechanism; Stochastic Excitation; Rademacher Complexity; COMPUTATIONAL POWER; NEURONS; NOISE;
DOI:
Not available
Chinese Library Classification:
TP [Automation Technology; Computer Technology];
Discipline Code:
0812;
Abstract:
Recent years have seen a surge of interest in spiking neural networks (SNNs). The performance of SNNs hinges not only on the search for apposite architectures and connection weights, as in conventional artificial neural networks, but also on the meticulous configuration of their intrinsic structures. However, comprehensive studies examining the impact of intrinsic structures have been scarce, so developers often find it challenging to apply a standardized SNN configuration across diverse datasets or tasks. This work delves into the intrinsic structures of SNNs. Initially, we draw two key conclusions: (1) the membrane time hyper-parameter is intimately linked to the eigenvalues of the integration operation, dictating the functional topology of spiking dynamics; (2) various hyper-parameters of the firing-reset mechanism govern the overall firing capacity of an SNN, mitigating the injection ratio or sampling density of input data. These findings elucidate why the efficacy of SNNs hinges heavily on the configuration of intrinsic structures, and they lead to the recommendation that enhancing the adaptability of these structures improves the overall performance and applicability of SNNs. Inspired by this recognition, we propose two feasible approaches to enhance SNN learning: developing self-connection architectures and stochastic spiking neurons to augment the adaptability of the integration operation and the firing-reset mechanism, respectively. We theoretically prove that (1) both methods promote the expressive property of universal approximation, (2) incorporating self-connection architectures fosters ample solutions and structural stability for SNNs approximating adaptive dynamical systems, and (3) stochastic spiking neurons maintain generalization bounds with an exponential reduction in Rademacher complexity.
Empirical experiments conducted on various real-world datasets affirm the effectiveness of our proposed methods.
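The intrinsic structures named in the abstract (leaky integration governed by a membrane time hyper-parameter, the firing-reset mechanism, self-connection feedback, and stochastic excitation) can be illustrated with a minimal discrete-time leaky integrate-and-fire sketch. The parameter names and update rule below are illustrative assumptions for exposition, not the paper's actual formulation:

```python
import math
import random

def lif_run(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0,
            self_weight=0.0, stochastic=False, seed=0):
    """Simulate one discrete-time LIF neuron over a sequence of input currents.

    tau          -- membrane time hyper-parameter; controls the leak in the
                    integration operation (larger tau = slower dynamics)
    v_threshold  -- firing threshold of the firing-reset mechanism
    v_reset      -- membrane potential immediately after a spike
    self_weight  -- optional self-connection feeding the last spike back in
    stochastic   -- if True, fire with sigmoid probability (smooth excitation)
                    instead of a hard threshold
    """
    rng = random.Random(seed)
    v, last_spike = 0.0, 0.0
    spikes = []
    for x in inputs:
        # Leaky integration: the decay toward the input is governed by tau.
        v = v + (-v + x + self_weight * last_spike) / tau
        if stochastic:
            # Stochastic excitation: sigmoid firing probability around threshold.
            p = 1.0 / (1.0 + math.exp(-(v - v_threshold)))
            spike = 1.0 if rng.random() < p else 0.0
        else:
            spike = 1.0 if v >= v_threshold else 0.0
        if spike:
            v = v_reset  # firing-reset mechanism
        last_spike = spike
        spikes.append(spike)
    return spikes

spikes = lif_run([1.5] * 10)  # deterministic run: fires every other step
print(sum(spikes))            # 5 spikes in 10 steps
```

Setting `self_weight > 0` feeds each spike back into the next integration step (a toy self-connection architecture), while `stochastic=True` replaces the hard threshold with a sigmoid firing probability, loosely mirroring the two adaptability mechanisms the abstract proposes.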
Pages: 74
Related Papers (50 total):
  • [21] Ren, Dayong; Ma, Zhe; Chen, Yuanpei; Peng, Weihang; Liu, Xiaode; Zhang, Yuhan; Guo, Yufei. Spiking PointNet: Spiking Neural Networks for Point Clouds. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [22] Kim, Youngeun; Li, Yuhang; Park, Hyoungseob; Venkatesha, Yeshwanth; Panda, Priyadarshini. Neural Architecture Search for Spiking Neural Networks. COMPUTER VISION, ECCV 2022, PT XXIV, 2022, 13684: 36-56.
  • [23] Huy Le Nguyen; Chu, Dominique. Incremental Neural Synthesis for Spiking Neural Networks. 2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022: 649-656.
  • [24] Strohmer, Beck; Stagsted, Rasmus Karnoe; Manoonpong, Poramate; Larsen, Leon Bonde. Integrating Non-spiking Interneurons in Spiking Neural Networks. FRONTIERS IN NEUROSCIENCE, 2021, 15.
  • [25] Sadovsky, Erik; Jakubec, Maros; Jarinova, Darina; Jarina, Roman. Dataset Conversion for Spiking Neural Networks. 2023 33RD INTERNATIONAL CONFERENCE RADIOELEKTRONIKA, 2023.
  • [26] Slade, Sam; Zhang, Li. Topological Evolution of Spiking Neural Networks. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018.
  • [27] Tavanaei, Amirhossein; Ghodrati, Masoud; Kheradpisheh, Saeed Reza; Masquelier, Timothee; Maida, Anthony. Deep learning in spiking neural networks. NEURAL NETWORKS, 2019, 111: 47-63.
  • [28] Zhang, Shao-Qun; Zhou, Zhi-Hua. Theoretically Provable Spiking Neural Networks. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022.
  • [29] Amin, HH; Fujii, RH. Learning algorithm for spiking neural networks. ADVANCES IN NATURAL COMPUTATION, PT 1, PROCEEDINGS, 2005, 3610: 456-465.
  • [30] Myoung Won Cho; M. Y. Choi. Response Theory of Spiking Neural Networks. Journal of the Korean Physical Society, 2020, 77: 168-176.