A Procedural Constructive Learning Mechanism with Deep Reinforcement Learning for Cognitive Agents

Cited by: 0
Authors
Leonardo de Lellis Rossi
Eric Rohmer
Paula Dornhofer Paro Costa
Esther Luna Colombini
Alexandre da Silva Simões
Ricardo Ribeiro Gudwin
Affiliations
[1] Faculdade de Engenharia Elétrica e de Computação (FEEC),Universidade Estadual de Campinas (Unicamp)
[2] Instituto de Computação (IC),Universidade Estadual de Campinas (Unicamp)
[3] Departamento de Engenharia de Controle e Automação (DECA),Universidade Estadual Paulista (Unesp), Instituto de Ciência e Tecnologia de Sorocaba (ICTS)
[4] Hub of Artificial Intelligence and Cognitive Architectures (H.IAAC)
[5] Unicamp
Source
Journal of Intelligent & Robotic Systems, 2024, 110(01)
Keywords
Cognitive architecture; Neural networks; Deep reinforcement learning; Developmental robotics
DOI
Not available
Abstract
Recent advances in AI and deep learning have created a growing demand for artificial agents capable of performing tasks in increasingly complex environments. To address the challenges of continuous learning and knowledge capacity in this context, cognitive architectures inspired by human cognition have gained significance. This study contributes to existing research by introducing a cognitive-attentional system that employs a constructive neural-network-based learning approach for the continuous acquisition of procedural knowledge. We replace an incremental tabular reinforcement learning algorithm with a constructive neural-network deep reinforcement learning mechanism for continuous sensorimotor knowledge acquisition, thereby enhancing overall learning capacity. The primary emphasis of this modification is on optimizing memory utilization and reducing training time. Our study presents a learning strategy that combines deep reinforcement learning with procedural learning, mirroring the incremental learning process observed in human sensorimotor development. This approach is embedded within the CONAIM cognitive-attentional architecture, leveraging the cognitive tools of CST. The proposed learning mechanism allows the model to dynamically create and modify elements in its procedural memory, facilitating the reuse of previously acquired functions and procedures, and equips the model with the capability to combine learned elements to adapt effectively to complex scenarios. The constructive neural network starts with a single hidden neuron but can adapt its internal architecture in response to its performance on procedural and sensorimotor learning tasks, inserting new neurons or hidden layers.
Experiments in simulation with a humanoid robot demonstrate the successful resolution of tasks that incremental knowledge acquisition had previously failed to solve. During training, the constructive agent obtained at least 40% more reward and executed 8% more actions than the other agents; during testing, it performed 15% more actions than its counterparts.
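The constructive mechanism described in the abstract, a network that starts from a single hidden neuron and widens itself when learning stalls, can be sketched as follows. This is a minimal illustration only: the `ConstructiveNet` class, the loss-based growth trigger, and all thresholds are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

class ConstructiveNet:
    """Hypothetical sketch of a constructive network: one hidden
    neuron at start, widened when the loss stops improving."""

    def __init__(self, n_in, n_out, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_in, self.n_out = n_in, n_out
        self.W1 = self.rng.normal(scale=0.1, size=(n_in, 1))  # single hidden neuron
        self.W2 = self.rng.normal(scale=0.1, size=(1, n_out))
        self.best_loss = np.inf
        self.stall = 0

    def forward(self, x):
        h = np.tanh(x @ self.W1)
        return h @ self.W2

    def add_neuron(self):
        # Widen the hidden layer by one randomly initialised unit.
        self.W1 = np.hstack([self.W1, self.rng.normal(scale=0.1, size=(self.n_in, 1))])
        self.W2 = np.vstack([self.W2, self.rng.normal(scale=0.1, size=(1, self.n_out))])

    def maybe_grow(self, loss, patience=5):
        # Grow only after `patience` consecutive checks without improvement.
        if loss < self.best_loss - 1e-4:
            self.best_loss, self.stall = loss, 0
        else:
            self.stall += 1
            if self.stall >= patience:
                self.add_neuron()
                self.stall = 0

net = ConstructiveNet(n_in=4, n_out=2)
assert net.W1.shape == (4, 1)
for _ in range(10):          # a flat loss signal eventually triggers growth
    net.maybe_grow(loss=1.0)
print(net.W1.shape[1])       # hidden width is now 2
```

In the paper the trigger is tied to performance on procedural and sensorimotor learning tasks rather than a raw loss plateau, and growth may also insert whole hidden layers; the plateau heuristic above is simply the easiest trigger to show in a few lines.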