A Procedural Constructive Learning Mechanism with Deep Reinforcement Learning for Cognitive Agents

Cited: 0
Authors
Rossi, Leonardo de Lellis [1 ,4 ]
Rohmer, Eric [1 ,4 ]
Costa, Paula Dornhofer Paro [1 ,4 ]
Colombini, Esther Luna [2 ,4 ]
Simoes, Alexandre da Silva [3 ,4 ]
Gudwin, Ricardo Ribeiro [1 ,4 ]
Affiliations
[1] Univ Estadual Campinas, Fac Engn Elect & Comp FEEC, Unicamp, Campinas, Brazil
[2] Univ Estadual Campinas, Inst Comp IC, Unicamp, Campinas, Brazil
[3] Univ Estadual Paulista Unesp, Dept Engn Controle & Automacao DECA, Inst Ciencia & Tecnol Sorocaba ICTS, Campus Sorocaba, Sorocaba, SP, Brazil
[4] Univ Estadual Campinas, Hub Artificial Intelligence & Cognit Architectures, Campinas, Brazil
Keywords
Cognitive architecture; Neural networks; Deep reinforcement learning; Developmental robotics; CONSCIOUSNESS; DESIDERATA; ROBOT;
DOI
10.1007/s10846-024-02064-9
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advancements in AI and deep learning have created a growing demand for artificial agents capable of performing tasks in increasingly complex environments. To address the challenges of continuous learning constraints and knowledge capacity in this context, cognitive architectures inspired by human cognition have gained significance. This study contributes to existing research by introducing a cognitive-attentional system that employs a constructive neural-network-based learning approach for the continuous acquisition of procedural knowledge. We replace an incremental tabular reinforcement learning algorithm with a constructive neural-network deep reinforcement learning mechanism for continuous sensorimotor knowledge acquisition, thereby enhancing overall learning capacity. This modification primarily aims to optimize memory utilization and reduce training time. Our study presents a learning strategy that combines deep reinforcement learning with procedural learning, mirroring the incremental learning process observed in human sensorimotor development. The approach is embedded within the CONAIM cognitive-attentional architecture and leverages the cognitive tools of CST. The proposed learning mechanism allows the model to dynamically create and modify elements in its procedural memory, facilitating the reuse of previously acquired functions and procedures, and equips the model to combine learned elements so it can adapt effectively to complex scenarios. The constructive neural network starts with a single hidden layer comprising one neuron and adapts its internal architecture in response to its performance on procedural and sensorimotor learning tasks, inserting new hidden layers or neurons as needed.
Experiments with a simulated humanoid robot demonstrate the successful resolution of tasks that incremental knowledge acquisition had previously failed to solve. During the training phase, the constructive agent achieved at least 40% greater rewards and executed 8% more actions than the other agents. In the subsequent testing phase, the constructive agent performed 15% more actions than its counterparts.
Pages: 25