Reinforcement Learning and Attractor Neural Network Models of Associative Learning

Cited by: 2
Authors:
Hamid, Oussama H. [1]
Braun, Jochen [2]
Affiliations:
[1] Univ Nottingham, Sch Comp Sci, Nottingham, England
[2] Univ Magdeburg, Inst Cognit Biol, Magdeburg, Germany
Keywords:
Attractor neural networks; Model-free and model-based reinforcement learning; Stability-plasticity dilemma; Reversal learning; LONG-TERM-MEMORY; EPISODIC MEMORY; DECISION-MAKING; NEURONAL CORRELATE; VICARIOUS TRIAL; WORKING-MEMORY; REWARD; SYSTEMS; PREDICTION; CONTEXT
DOI:
10.1007/978-3-030-16469-0_17
CLC classification:
TP18 [Artificial Intelligence Theory]
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
Despite indisputable advances in reinforcement learning (RL) research, some cognitive and architectural challenges remain. The primary challenge in the current conception of RL stems from how the theory defines states. Whereas states under laboratory conditions are tractable (owing to the Markov property), states in real-world RL are high-dimensional, continuous, and partially observable. Effective learning and generalization can therefore be guaranteed only if the subset of reward-relevant dimensions is correctly identified for each state. Moreover, the computational discrepancy between model-free and model-based RL methods creates a stability-plasticity dilemma: how should optimal decision-making be controlled when multiple interacting, competing systems each implement a different type of RL method? Drawing on behavioral results showing that human subjects in a reversal learning paradigm flexibly define states in a way that a simple RL model cannot capture, we argue that these challenges can be met by infusing the RL framework, as an algorithmic theory of human behavior, with the strengths of the attractor framework at the level of neural implementation. Our position is supported by the hypothesis that 'attractor states', stable patterns of self-sustained and reverberating brain activity, are a manifestation of the collective dynamics of neuronal populations in the brain. With its capacity for pattern completion and its ability to link events in temporal order, an attractor network is relatively insensitive to noise, allowing it to cope with the sparse data characteristic of high-dimensional, continuous real-world RL.
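To make the reversal learning contrast concrete, the following is a minimal sketch (not the authors' actual model) of the kind of simple model-free RL agent the abstract refers to: an epsilon-greedy Q-learner in a two-armed task whose reward contingencies reverse midway. All names and parameter values (learning rate, exploration rate, trial counts) are illustrative assumptions.

```python
import random

def run_reversal(n_trials=200, alpha=0.1, epsilon=0.1, seed=0):
    """Model-free Q-learner in a two-armed reversal learning task."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                                    # action values for the two arms
    correct = []
    for t in range(n_trials):
        rewarded_arm = 0 if t < n_trials // 2 else 1  # contingency reversal at midpoint
        if rng.random() < epsilon:                    # epsilon-greedy action selection
            a = rng.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        r = 1.0 if a == rewarded_arm else 0.0         # deterministic reward schedule
        q[a] += alpha * (r - q[a])                    # delta-rule (prediction error) update
        correct.append(a == rewarded_arm)
    return correct

if __name__ == "__main__":
    correct = run_reversal()
    pre = sum(correct[:100]) / 100
    post = sum(correct[100:120]) / 20                 # accuracy just after the reversal
    print(f"before reversal: {pre:.2f}, just after reversal: {post:.2f}")
```

Because such an agent relearns its action values incrementally after the reversal, its accuracy dips for many trials; the abstract's point is that human subjects can instead redefine the state itself and switch far more abruptly.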
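The pattern-completion property attributed to attractor networks can likewise be illustrated with a classical Hopfield network, a standard attractor model; this is a sketch under that assumption, not the formulation used in the chapter. Hebbian storage and sign-threshold updates drive a noisy cue back toward the nearest stored pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))  # P random binary memories

# Hebbian outer-product learning rule, self-connections removed
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=5):
    """Iterate synchronous sign-threshold dynamics from a (possibly noisy) cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt 20% of one stored pattern; the dynamics complete the pattern.
cue = patterns[0].copy()
flipped = rng.choice(N, size=20, replace=False)
cue[flipped] *= -1
out = recall(cue)
print("overlap with stored pattern:", (out @ patterns[0]) / N)  # ~1.0
```

The recovered overlap near 1.0 is the noise insensitivity the abstract invokes: partial or corrupted input settles into a stable, self-sustained activity pattern.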
Pages: 327-349 (23 pages)