Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation

Cited by: 4
Authors
Golden, Ryan [1 ,2 ]
Delanois, Jean Erik [2 ,3 ]
Sanda, Pavel [4 ]
Bazhenov, Maxim [1 ,2 ]
Affiliations
[1] Univ Calif San Diego, Neurosci Grad Program, La Jolla, CA 92093 USA
[2] Univ Calif San Diego, Dept Med, La Jolla, CA 92093 USA
[3] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA 92093 USA
[4] Czech Acad Sci, Inst Comp Sci, Prague, Czech Republic
Keywords
MEMORY CONSOLIDATION; CONNECTIONIST MODELS; FEEDBACK-CIRCUITS; PREFRONTAL CORTEX; REPLAY; REACTIVATION; FEEDFORWARD; PREDICTION; OSCILLATIONS; WAKEFULNESS;
DOI
10.1371/journal.pcbi.1010628
Chinese Library Classification
Q5 [Biochemistry]
Subject Classification Codes
071010; 081704
Abstract
Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously and typically learns best when new training is interleaved with periods of sleep for memory consolidation. Here we used a spiking neural network to study the mechanisms behind catastrophic forgetting and the role of sleep in preventing it. The network could be trained to learn a complex foraging task but exhibited catastrophic forgetting when trained sequentially on different tasks. In synaptic weight space, new task training moved the synaptic weight configuration away from the manifold representing the old task, leading to forgetting. Interleaving new task training with periods of offline reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network's synaptic weight state to the previously learned manifold while allowing the weight configuration to converge towards the intersection of the manifolds representing the old and new tasks. The study reveals a possible strategy of synaptic weight dynamics that the brain applies during sleep to prevent forgetting and optimize learning.

Author summary
Artificial neural networks can achieve superhuman performance in many domains. Despite these advances, these networks fail at sequential learning: they achieve optimal performance on newer tasks at the expense of performance on previously learned tasks. Humans and animals, on the other hand, have a remarkable ability to learn continuously and to incorporate new data into their corpus of existing knowledge. Sleep has been hypothesized to play an important role in memory and learning by enabling spontaneous reactivation of previously learned memory patterns. Here we use a spiking neural network model, simulating sensory processing and reinforcement learning in the animal brain, to demonstrate that interleaving new task training with sleep-like activity optimizes the network's memory representation in synaptic weight space and prevents forgetting of old memories. Sleep makes this possible by replaying old memory traces without explicit use of the old task data.
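The weight-space picture described in the abstract (sequential training drags the weights off the old task's solution manifold, whereas interleaving new-task training with sleep-like replay steers them toward the intersection of the old and new manifolds) can be illustrated with a deliberately simple sketch. The toy below is not the paper's spiking model: the two linear "task manifolds", the quadratic losses, and the explicit pull-back step standing in for sleep replay are all invented for illustration; in the actual model, consolidation arises from spontaneous spiking reactivation without access to old task data.

# Toy 2-D sketch of the weight-space picture in the abstract.
# NOT the paper's spiking model: each task "manifold" is a hypothetical line
# of weight vectors, and the "sleep" step is an explicit nudge back toward
# the old task's manifold, standing in for replay of old activity patterns.
import numpy as np

# Each task is "solved" on a line of weight vectors {w : v.w == c}.
a, cA = np.array([1.0, 0.0]), 1.0   # task A manifold: w[0] == 1
b, cB = np.array([1.0, 1.0]), 0.0   # task B manifold: w[0] + w[1] == 0

def grad(w, v, c):
    """Gradient of the quadratic task loss 0.5 * (v.w - c)**2."""
    return (v @ w - c) * v

def dist_to_manifold(w, v, c):
    """Distance from w to the line {x : v.x == c}."""
    return abs(v @ w - c) / np.linalg.norm(v)

w0 = np.array([1.0, 2.0])           # start on task A's manifold (A learned)
lr, steps = 0.1, 300

# 1) Sequential training: gradient steps on task B only.
w_seq = w0.copy()
for _ in range(steps):
    w_seq -= lr * grad(w_seq, b, cB)

# 2) Interleaved training: each task-B step is followed by a sleep-like step
#    pulling w back toward task A's manifold (an illustrative stand-in; the
#    real model uses spontaneous replay, not an explicit constraint).
w_int = w0.copy()
for _ in range(steps):
    w_int -= lr * grad(w_int, b, cB)   # awake: new task
    w_int -= lr * grad(w_int, a, cA)   # sleep: drift back toward old manifold

for name, w in [("sequential", w_seq), ("interleaved", w_int)]:
    print(f"{name:11s} w = {np.round(w, 3)}  "
          f"dist to A manifold = {dist_to_manifold(w, a, cA):.3f}  "
          f"dist to B manifold = {dist_to_manifold(w, b, cB):.3f}")

In this sketch, sequential training ends on task B's manifold but far from task A's (forgetting), while the interleaved schedule converges near the intersection of the two lines, keeping both tasks solved.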
Pages: 31