STATISTICAL DYNAMICS OF LEARNING PROCESSES IN SPIKING NEURAL NETWORKS

Citations: 0
Author
Hyland, David C. [1 ]
Affiliation
[1] Texas A&M Univ, College Stn, TX 77843 USA
Source
Keywords
DOI
None available
CLC Number
V [Aeronautics, Astronautics]
Discipline Codes
08 ; 0825 ;
Abstract
In previous work, the author and Dr. Jer-Nan Juang contributed a new neural net architecture within the framework of "second-generation" neural models. We showed how to implement backpropagation learning in a massively parallel architecture involving only local computations, thereby capturing one of the principal advantages of biological neural nets. Since then, a large body of neurobiological research has given rise to "third-generation" models, namely spiking neural nets, wherein the brief, sharp pulses (spikes) produced by neurons are explicitly modeled. Information is encoded not in average firing rates but in the temporal pattern of the spikes. Further, no physiological basis for backpropagation has been found; rather, synaptic plasticity is driven by the timing of spikes. The present paper examines the statistical dynamics of learning processes in spiking neural nets. Equations describing the evolution of synaptic efficacies and the probability distributions of the neural states are derived. Although the system is strongly nonlinear, the typically large number of synapses per neuron (~10,000) permits us to obtain a closed system of equations. As in the earlier work, we see that the learning process in this more realistic setting is dominated by local interactions, thereby preserving massive parallelism. It is hoped that the formulation given here will provide the basis for the rigorous analysis of learning dynamics in very large neural nets (10^10 neurons in the human brain!) for which direct simulation is difficult or impractical.
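The paper's own evolution equations are not reproduced on this page. As a rough, generic illustration of spike-timing-driven plasticity of the kind the abstract refers to, the sketch below implements a minimal pair-based STDP rule; this is a standard textbook form, not the author's specific formulation, and the amplitudes and time constants are hypothetical placeholder values.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative only; not the paper's formulation).
# A synapse is strengthened when the presynaptic spike precedes the postsynaptic
# spike, and weakened otherwise, with exponentially decaying influence in the
# spike-time difference.

A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes (hypothetical)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in ms (hypothetical)

def stdp_update(w, t_pre, t_post, w_min=0.0, w_max=1.0):
    """Update one synaptic efficacy w from a pre/post spike-time pair (ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post: potentiation
        w += A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post fires before pre: depression
        w -= A_MINUS * np.exp(dt / TAU_MINUS)
    return float(np.clip(w, w_min, w_max))

# Example: a causal pairing (pre fires 5 ms before post) strengthens the synapse.
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)
print(w)  # slightly above 0.5
```

The abstract's closure argument rests on the large synapse count per neuron. One plausible reading (an assumption here, not spelled out in the abstract) is that the summed input over ~10,000 synapses concentrates around its mean-field value, so its distribution is well described by a few low-order moments; a quick numerical check illustrates the effect:

```python
# With many independent synapses, the total input per time step is nearly
# Gaussian (central limit theorem) and close to its mean-field prediction.
rng = np.random.default_rng(0)
n_synapses = 10_000
w = rng.uniform(0.0, 1.0, n_synapses)    # synaptic efficacies
fired = rng.random(n_synapses) < 0.05    # which synapses spiked this step
total_input = (w * fired).sum()
mean_field = w.mean() * 0.05 * n_synapses
print(total_input, mean_field)           # sample value vs. mean-field estimate
```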
Pages: 363-378
Page count: 16