The hippocampal formation as a hierarchical generative model supporting generative replay and continual learning

Cited by: 22
Authors
Stoianov, Ivilin [1]
Maisto, Domenico [1]
Pezzulo, Giovanni [1,2]
Affiliations
[1] CNR, Inst Cognit Sci & Technol, Rome, Italy
[2] CNR, Inst Cognit Sci & Technol, Via S Martino Battaglia, Rome, Italy
Funding
European Research Council
Keywords
Hippocampus; Generative model; Generative replay; Cognitive map; Sequence generation; Continual learning; PLACE CELLS; COGNITIVE MAP; ACTIVE INFERENCE; EPISODIC MEMORY; REVERSE REPLAY; SEQUENCES; CONTEXT; FUTURE; DECISION; SYSTEMS
DOI
10.1016/j.pneurobio.2022.102329
CLC Classification
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
We advance a novel computational theory of the hippocampal formation as a hierarchical generative model that organizes sequential experiences, such as rodent trajectories during spatial navigation, into coherent spatio-temporal contexts. We propose that the hippocampal generative model is endowed with inductive biases to identify individual items of experience (first hierarchical layer), organize them into sequences (second layer) and cluster them into maps (third layer). This theory entails a novel characterization of hippocampal reactivations as generative replay: the offline resampling of fictive sequences from the generative model, which supports the continual learning of multiple sequential experiences. We show that the model learns and efficiently retains multiple spatial navigation trajectories by organizing them into spatial maps. Furthermore, the model reproduces flexible and prospective aspects of hippocampal dynamics that are challenging to explain within existing frameworks. This theory reconciles multiple roles of the hippocampal formation in map-based navigation, episodic memory and imagination.
Pages: 20
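The architecture summarized in the abstract (items at the first layer, sequences at the second, maps at the third) and the idea of generative replay can be illustrated with a small toy model. The Python sketch below is not the authors' implementation; it is a minimal stand-in, assuming discretized locations as items, first-order transition statistics in place of the sequence layer, and one transition matrix per map cluster. The class and function names are hypothetical.

```python
# Illustrative sketch only (NOT the paper's model): a toy three-level scheme
# in the spirit of the theory summarized in the abstract.
# Level 1: discrete item codes (here, discretized locations on a grid).
# Level 2: sequences, reduced here to first-order transitions between items.
# Level 3: maps, i.e. clusters of sequences sharing one transition model.
# "Generative replay" is approximated by resampling fictive sequences from a
# learned map and interleaving them with new experience during learning.

import numpy as np


class HierarchicalSequenceModel:
    def __init__(self, n_items, n_maps, alpha=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        # One Dirichlet-style transition-count matrix per map (level 3).
        self.counts = np.full((n_maps, n_items, n_items), alpha)

    def _transitions(self, m):
        # Row-normalized transition probabilities of map m.
        return self.counts[m] / self.counts[m].sum(axis=1, keepdims=True)

    def _loglik(self, seq, m):
        # Log-likelihood of a sequence (level 2) under map m.
        T = self._transitions(m)
        return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

    def assign_map(self, seq):
        # Cluster a sequence into the map that explains it best.
        return int(np.argmax([self._loglik(seq, m)
                              for m in range(len(self.counts))]))

    def learn(self, seq, m=None):
        # Update the transition counts of the (assigned) map with one sequence.
        m = self.assign_map(seq) if m is None else m
        for a, b in zip(seq[:-1], seq[1:]):
            self.counts[m, a, b] += 1.0
        return m

    def replay(self, m, length=10, start=None):
        # Generative replay: sample a fictive sequence from map m.
        T = self._transitions(m)
        s = self.rng.integers(T.shape[0]) if start is None else start
        seq = [int(s)]
        for _ in range(length - 1):
            s = self.rng.choice(T.shape[0], p=T[s])
            seq.append(int(s))
        return seq


# Continual learning with interleaved replay: when new trajectories arrive,
# fictive sequences from a previously learned map are rehearsed as well, so
# that the older map is not overwritten by the new experience.
model = HierarchicalSequenceModel(n_items=25, n_maps=3)
old_trajectory = [0, 1, 2, 3, 4, 9, 14, 19, 24]      # path on a 5x5 grid
new_trajectory = [24, 23, 22, 21, 20, 15, 10, 5, 0]
m_old = model.learn(old_trajectory, m=0)
for _ in range(20):                                   # offline replay phase
    model.learn(model.replay(m_old), m=m_old)
m_new = model.learn(new_trajectory)                   # clustered online
print("new trajectory assigned to map", m_new)
```

The point the sketch preserves is the replay-based continual-learning loop: fictive sequences sampled from an already-learned map are interleaved with new experience, so learning a new trajectory (clustered into whichever map best explains it) does not overwrite the old one. The per-map transition matrices are a deliberately coarse stand-in for the paper's richer hierarchical generative model.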