Metaplasticity and memory in multilevel recurrent feed-forward networks

Cited by: 0
Authors
Zanardi, Gianmarco [1 ,2 ]
Bettotti, Paolo [1 ]
Morand, Jules [1 ,2 ]
Pavesi, Lorenzo [1 ]
Tubiana, Luca [1 ,2 ]
Affiliations
[1] Univ Trento, Phys Dept, Via Sommarive 14, I-38123 Trento, Italy
[2] Trento Inst Fundamental Phys & Applicat, INFN TIFPA, I-38123 Trento, Italy
Funding
European Research Council;
Keywords
MODELS;
DOI
10.1103/PhysRevE.110.054304
Abstract
Network systems can exhibit memory effects in which the interactions between different pairs of nodes adapt in time, leading to the emergence of preferred connections, patterns, and subnetworks. To a first approximation, this memory can be modeled through a "plastic" Hebbian or homophily mechanism, in which edges get reinforced proportionally to the amount of information flowing through them. However, recent studies on glia-neuron networks have highlighted how memory can evolve due to more complex dynamics, including multilevel network structures and "metaplastic" effects that modulate reinforcement. Inspired by those systems, here we develop a simple and general model for the dynamics of an adaptive network with an additional metaplastic mechanism that varies the rate of Hebbian strengthening of its edge connections. The metaplastic term acts on a second network level in which edges are grouped together, simulating local, longer timescale effects. Specifically, we consider a biased random walk on a cyclic feed-forward network. The random walk chooses its steps according to the weights of the network edges. The weights evolve through a Hebbian mechanism modulated by a metaplastic reinforcement, biasing the walker to prefer edges that have been already explored. We study the dynamical emergence (memorization) of preferred paths and their retrieval and identify three regimes: one dominated by the Hebbian term, one in which the metareinforcement drives memory formation, and a balanced one. We show that, in the latter two regimes, metareinforcement allows the retrieval of a previously stored path even after the weights have been reset to zero to erase Hebbian memory.
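The mechanism described in the abstract — a biased random walk whose edge weights grow Hebbianly, with the reinforcement rate modulated by a slower, group-level metaplastic factor — can be sketched in a few lines. The following is a minimal illustrative toy, not the paper's actual equations: the layered network, the grouping of edges by source node, and the parameters `ETA` and `MU` are all assumptions chosen for clarity.

```python
import random

# Toy sketch (assumed dynamics, not the published model): a walker on a
# cyclic feed-forward network of N_LAYERS layers. Edge weights grow via a
# Hebbian rule whose rate is set by a slower "metaplastic" factor shared
# by a whole group of edges (here: all edges leaving one node).

N_LAYERS, NODES_PER_LAYER = 4, 3
ETA = 0.1   # Hebbian reinforcement rate (assumed value)
MU = 0.01   # metaplastic reinforcement rate (assumed value)

# weights[(layer, i, j)]: weight of the edge from node i in `layer`
# to node j in the next layer (layers wrap around cyclically)
weights = {(l, i, j): 0.0 for l in range(N_LAYERS)
           for i in range(NODES_PER_LAYER) for j in range(NODES_PER_LAYER)}
# one metaplastic factor per group of edges (assumed grouping: source node)
meta = {(l, i): 1.0 for l in range(N_LAYERS) for i in range(NODES_PER_LAYER)}

def step(layer, node, rng):
    """Pick the next node with probability proportional to 1 + edge weight."""
    edges = [(layer, node, j) for j in range(NODES_PER_LAYER)]
    probs = [1.0 + weights[e] for e in edges]  # baseline keeps exploration alive
    e = rng.choices(edges, weights=probs)[0]
    # Hebbian update, modulated by the group's metaplastic factor
    weights[e] += ETA * meta[(layer, node)]
    # slower metaplastic reinforcement of the whole edge group
    meta[(layer, node)] += MU
    return e[2]

rng = random.Random(0)
node = 0
for t in range(1000):          # the walker cycles through the layers
    node = step(t % N_LAYERS, node, rng)

# Erasing the Hebbian memory (resetting the weights to zero) leaves the
# metaplastic factors intact, which is the group-level trace the paper
# shows can drive path retrieval in the meta-dominated regimes.
for e in weights:
    weights[e] = 0.0
```

Because every visited edge group increments its metaplastic factor, the `meta` dictionary still encodes which groups the walker frequented even after the weight reset — the qualitative point of the retrieval experiment, under these assumed update rules.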
Pages: 12