Metaplasticity and memory in multilevel recurrent feed-forward networks

Cited: 0
Authors
Zanardi, Gianmarco [1 ,2 ]
Bettotti, Paolo [1 ]
Morand, Jules [1 ,2 ]
Pavesi, Lorenzo [1 ]
Tubiana, Luca [1 ,2 ]
Affiliations
[1] Univ Trento, Phys Dept, Via Sommar 14, I-38123 Trento, Italy
[2] Trento Inst Fundamental Phys & Applicat, INFN TIFPA, I-38123 Trento, Italy
Funding
European Research Council;
Keywords
MODELS;
DOI
10.1103/PhysRevE.110.054304
Chinese Library Classification
O35 [Fluid Mechanics]; O53 [Plasma Physics];
Subject Classification Codes
070204 ; 080103 ; 080704 ;
Abstract
Network systems can exhibit memory effects in which the interactions between different pairs of nodes adapt in time, leading to the emergence of preferred connections, patterns, and subnetworks. To a first approximation, this memory can be modeled through a "plastic" Hebbian or homophily mechanism, in which edges get reinforced proportionally to the amount of information flowing through them. However, recent studies on glia-neuron networks have highlighted how memory can evolve due to more complex dynamics, including multilevel network structures and "metaplastic" effects that modulate reinforcement. Inspired by those systems, here we develop a simple and general model for the dynamics of an adaptive network with an additional metaplastic mechanism that varies the rate of Hebbian strengthening of its edge connections. The metaplastic term acts on a second network level in which edges are grouped together, simulating local, longer timescale effects. Specifically, we consider a biased random walk on a cyclic feed-forward network. The random walk chooses its steps according to the weights of the network edges. The weights evolve through a Hebbian mechanism modulated by a metaplastic reinforcement, biasing the walker to prefer edges that have been already explored. We study the dynamical emergence (memorization) of preferred paths and their retrieval and identify three regimes: one dominated by the Hebbian term, one in which the metareinforcement drives memory formation, and a balanced one. We show that, in the latter two regimes, metareinforcement allows the retrieval of a previously stored path even after the weights have been reset to zero to erase Hebbian memory.
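The model described in the abstract can be sketched as a minimal simulation. Everything below is an illustrative assumption, not the paper's actual equations: the layered cyclic layout, the grouping of edges by source layer for the metaplastic rate, the linear bias (1 + w) on transition probabilities, and the parameter names `eta` (Hebbian rate) and `mu` (metaplastic rate) are all choices made for this sketch.

```python
import random

def run_walk(n_layers=4, nodes_per_layer=3, steps=2000,
             eta=0.1, mu=0.01, seed=0):
    """Biased random walk on a cyclic feed-forward network with
    Hebbian reinforcement modulated by a metaplastic rate (sketch)."""
    rng = random.Random(seed)
    # w[(layer, i, j)]: weight of the edge from node i in `layer`
    # to node j in the next layer (cyclic: the last layer feeds the first).
    w = {(l, i, j): 0.0
         for l in range(n_layers)
         for i in range(nodes_per_layer)
         for j in range(nodes_per_layer)}
    # One metaplastic rate per edge group; here edges are grouped by
    # their source layer (a modelling assumption of this sketch).
    m = [1.0] * n_layers

    layer, node = 0, 0
    for _ in range(steps):
        # Transition probabilities biased toward already-reinforced edges.
        scores = [1.0 + w[(layer, node, j)] for j in range(nodes_per_layer)]
        nxt = rng.choices(range(nodes_per_layer), weights=scores)[0]
        # Hebbian reinforcement, modulated by the group's metaplastic rate.
        w[(layer, node, nxt)] += eta * m[layer]
        # The traversed group's metaplastic rate slowly increases,
        # mimicking the longer-timescale second network level.
        m[layer] += mu
        layer, node = (layer + 1) % n_layers, nxt
    return w, m
```

With these dynamics, memorization of a preferred path shows up as a few edges per layer accumulating much larger weights than the rest; "erasing Hebbian memory" corresponds to resetting `w` to zero while keeping `m`, which still biases re-learning toward the previously stored path.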
Pages: 12
Related Papers
(50 records total)
  • [1] Synthesis of multilevel feed-forward MOS networks
    LIU, TK
    IEEE TRANSACTIONS ON COMPUTERS, 1977, 26 (06) : 581 - 588
  • [2] Feed-forward and recurrent neural networks in signal prediction
    Prochazka, Ales
    Pavelka, Ales
    ICCC 2007: 5TH IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL CYBERNETICS, PROCEEDINGS, 2007, : 93 - 96
  • [3] A greenhouse control with feed-forward and recurrent neural networks
    Fourati, Fathi
    Chtourou, Mohamed
    SIMULATION MODELLING PRACTICE AND THEORY, 2007, 15 (08) : 1016 - 1028
  • [4] In situ training of feed-forward and recurrent convolutional memristor networks
    Wang, Zhongrui
    Li, Can
    Lin, Peng
    Rao, Mingyi
    Nie, Yongyang
    Song, Wenhao
    Qiu, Qinru
    Li, Yunning
    Yan, Peng
    Strachan, John Paul
    Ge, Ning
    McDonald, Nathan
    Wu, Qing
    Hu, Miao
    Wu, Huaqiang
    Williams, R. Stanley
    Xia, Qiangfei
    Yang, J. Joshua
    NATURE MACHINE INTELLIGENCE, 2019, 1 (09) : 434 - 442
  • [6] Evolutionary approach to training feed-forward and recurrent neural networks
    Riley, Jeff
    Ciesielski, Victor B.
    International Conference on Knowledge-Based Intelligent Electronic Systems, Proceedings, KES, 1998, 3 : 596 - 602
  • [7] Deep feed-forward sequential memory networks for speech synthesis
    Bi, Mengxiao
    Lu, Heng
    Zhang, Shiliang
    Lei, Ming
    Yan, Zhijie
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 4794 - 4798
  • [8] Feed-forward neural networks
    Bebis, George
    Georgiopoulos, Michael
    IEEE Potentials, 1994, 13 (04): : 27 - 31
  • [9] Limits to the development of feed-forward structures in large recurrent neuronal networks
    Kunkel, Susanne
    Diesmann, Markus
    Morrison, Abigail
    Frontiers in Computational Neuroscience, 2010, 4