Learning in neural networks by reinforcement of irregular spiking

Cited by: 85
Authors:
Xie, XH
Seung, HS
Affiliations:
[1] MIT, Dept Brain & Cognit Sci, Cambridge, MA 02139 USA
[2] MIT, Howard Hughes Med Inst, Cambridge, MA 02139 USA
Source:
PHYSICAL REVIEW E | 2004, Vol. 69, Issue 04
DOI:
10.1103/PhysRevE.69.041909
Chinese Library Classification:
O35 [Fluid Mechanics]; O53 [Plasma Physics]
Subject Classification Codes:
070204 ; 080103 ; 080704 ;
Abstract:
Artificial neural networks are often trained with the backpropagation algorithm, which computes the gradient of an objective function with respect to the synaptic strengths. For a biological neural network, such a gradient computation would be difficult to implement because of the complex dynamics of intrinsic and synaptic conductances in neurons. Here we show that irregular spiking similar to that observed in biological neurons could be used as the basis for a learning rule that calculates a stochastic approximation to the gradient. The learning rule is derived for a special class of model networks in which neurons fire spike trains with Poisson statistics. The rule is compatible with forms of synaptic dynamics such as short-term facilitation and depression. By correlating the fluctuations in irregular spiking with a reward signal, the learning rule performs stochastic gradient ascent on the expected reward. It is applied to two examples: learning the XOR computation and learning direction selectivity using depressing synapses. We also show in simulation that the learning rule is applicable to a network of noisy integrate-and-fire neurons.
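
To make the idea in the abstract concrete, the sketch below shows a REINFORCE-style update for Poisson spiking units on the XOR task: each unit's spike-count fluctuation about its mean rate is correlated with a reward signal, yielding a stochastic estimate of the gradient of the expected reward. This is a minimal illustration, not the paper's implementation; the network size, exponential rate function, reward scheme, baseline, and hyperparameters are assumptions made for the example.

# Hedged sketch: reward-modulated learning for Poisson units (REINFORCE-style),
# in the spirit of the abstract. All names and numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, T = 2, 8, 20.0                          # inputs, hidden Poisson units, trial duration (a.u.)
W1 = 0.1 * rng.standard_normal((N_HID, N_IN + 1))    # hidden weights (+1 column for bias)
W2 = 0.1 * rng.standard_normal((1, N_HID + 1))       # output weights (+1 column for bias)
LR, BASELINE_DECAY = 0.002, 0.99
baseline = 0.0                                       # running mean reward (variance reduction)

def rates(W, x):
    """Exponential rate function lambda = T * exp(W @ x); with this choice the
    score function d log P(n | lambda) / dW reduces to outer(n - lambda, x)."""
    return T * np.exp(np.clip(W @ x, -10.0, 3.0))

xor_cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for trial in range(50000):
    (a, b), target = xor_cases[trial % 4]
    x = np.array([a, b, 1.0])                        # inputs plus bias term

    # Forward pass: every unit emits a Poisson spike count on this trial.
    lam_h = rates(W1, x)
    n_h = rng.poisson(lam_h)                         # fluctuating hidden spike counts
    h = np.append(n_h / T, 1.0)                      # normalized counts plus bias
    lam_o = rates(W2, h)
    n_o = rng.poisson(lam_o)[0]

    # Reward: +1 if the output count crosses a threshold exactly when XOR is 1.
    decision = int(n_o > T)
    R = 1.0 if decision == target else -1.0

    # Update: reward fluctuation times each unit's score function,
    # i.e. (spike count - mean count) correlated with the presynaptic input.
    W2 += LR * (R - baseline) * np.outer(n_o - lam_o, h)
    W1 += LR * (R - baseline) * np.outer(n_h - lam_h, x)
    baseline = BASELINE_DECAY * baseline + (1 - BASELINE_DECAY) * R

# Deterministic readout of the learned mapping, using mean rates instead of samples.
for (a, b), target in xor_cases:
    x = np.array([a, b, 1.0])
    h = np.append(rates(W1, x) / T, 1.0)
    print((a, b), target, int(rates(W2, h)[0] > T))

Because the updates multiply the reward by each unit's own spiking fluctuation, the rule needs no backward pass through the network; the running-average baseline only reduces the variance of the gradient estimate and does not bias it.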
Pages: 10
Related Papers (50 in total; first 10 shown below):
  • [1] A reinforcement learning algorithm for spiking neural networks
    Florian, RV
    [J]. Seventh International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, Proceedings, 2005, : 299 - 306
  • [2] Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses
    Yuan, Mengwen
    Wu, Xi
    Yan, Rui
    Tang, Huajin
    [J]. NEURAL COMPUTATION, 2019, 31 (12) : 2368 - 2389
  • [3] Learning in spiking neural networks by reinforcement of stochastic synaptic transmission
    Seung, HS
    [J]. NEURON, 2003, 40 (06) : 1063 - 1073
  • [4] Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
    Weidel, Philipp
    Duarte, Renato
    Morrison, Abigail
    [J]. FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2021, 15
  • [5] Reinforcement Learning in Memristive Spiking Neural Networks through Modulation of ReSuMe
    Ji, Xun
    Zhang, Yaozhong
    Li, Chuxi
    Wu, Tanghong
    Hu, Xiaofang
    [J]. ADVANCES IN MATERIALS, MACHINERY, ELECTRONICS III, 2019, 2073
  • [6] Soft-Reward Based Reinforcement Learning by Spiking Neural Networks
    Shi, Weiya
    [J]. ADVANCED RESEARCH ON INFORMATION SCIENCE, AUTOMATION AND MATERIAL SYSTEM, PTS 1-6, 2011, 219-220 : 770 - 773
  • [7] BrainQN: Enhancing the Robustness of Deep Reinforcement Learning with Spiking Neural Networks
    Feng, Shuo
    Cao, Jian
    Ou, Zehong
    Chen, Guang
    Zhong, Yi
    Wang, Zilin
    Yan, Juntong
    Chen, Jue
    Wang, Bingsen
    Zou, Chenglong
    Feng, Zebang
    Wang, Yuan
    [J]. ADVANCED INTELLIGENT SYSTEMS, 2024, 6 (09)
  • [8] On computational models of theory of mind and the imitative reinforcement learning in spiking neural networks
Gorgan Mohammadi, Ashena
    Ganjtabesh, Mohammad
    [J]. SCIENTIFIC REPORTS, 14
  • [9] Analog synaptic devices applied to spiking neural networks for reinforcement learning applications
    Kim, Jangsaeng
    Lee, Soochang
    Kim, Chul-Heung
    Park, Byung-Gook
    Lee, Jong-Ho
    [J]. SEMICONDUCTOR SCIENCE AND TECHNOLOGY, 2022, 37 (07)
  • [10] Spiking Neural Networks with Different Reinforcement Learning (RL) Schemes in a Multiagent Setting
    Christodoulou, Chris
    Cleanthous, Aristodemos
[J]. CHINESE JOURNAL OF PHYSIOLOGY, 2010, 53 (06) : 447 - 453