A Procedural Constructive Learning Mechanism with Deep Reinforcement Learning for Cognitive Agents

Cited by: 0
Authors
Rossi, Leonardo de Lellis [1 ,4 ]
Rohmer, Eric [1 ,4 ]
Costa, Paula Dornhofer Paro [1 ,4 ]
Colombini, Esther Luna [2 ,4 ]
Simoes, Alexandre da Silva [3 ,4 ]
Gudwin, Ricardo Ribeiro [1 ,4 ]
Affiliations
[1] Univ Estadual Campinas, Fac Engn Elect & Comp FEEC, Unicamp, Campinas, Brazil
[2] Univ Estadual Campinas, Inst Comp IC, Unicamp, Campinas, Brazil
[3] Univ Estadual Paulista Unesp, Dept Engn Controle & Automacao DECA, Inst Ciencia & Tecnol Sorocaba ICTS, Campus Sorocaba, Sorocaba, SP, Brazil
[4] Univ Estadual Campinas, Hub Artificial Intelligence & Cognit Architectures, Campinas, Brazil
Keywords
Cognitive architecture; Neural networks; Deep reinforcement learning; Developmental robotics; Consciousness; Desiderata; Robot
DOI
10.1007/s10846-024-02064-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recent advancements in AI and deep learning have created a growing demand for artificial agents capable of performing tasks in increasingly complex environments. To address the challenges of continuous learning and limited knowledge capacity in this context, cognitive architectures inspired by human cognition have gained significance. This study contributes to existing research by introducing a cognitive-attentional system that employs a constructive neural-network-based learning approach for the continuous acquisition of procedural knowledge. We replace an incremental tabular reinforcement learning algorithm with a constructive neural network deep reinforcement learning mechanism for continuous sensorimotor knowledge acquisition, thereby enhancing overall learning capacity. This modification primarily aims to optimize memory utilization and reduce training time. Our study presents a learning strategy that combines deep reinforcement learning with procedural learning, mirroring the incremental learning process observed in human sensorimotor development. The approach is embedded within the CONAIM cognitive-attentional architecture and leverages the cognitive tools of CST. The proposed learning mechanism allows the model to dynamically create and modify elements in its procedural memory, facilitating the reuse of previously acquired functions and procedures, and equips it to combine learned elements to adapt to complex scenarios. The constructive neural network starts with a single hidden layer containing one neuron, but it can adapt its internal architecture in response to its performance on procedural and sensorimotor learning tasks, inserting new hidden layers or neurons as needed.
Experiments conducted in simulation with a humanoid robot demonstrate the successful resolution of tasks that incremental knowledge acquisition alone had previously failed to solve. During training, the constructive agent achieved at least 40% greater rewards and executed 8% more actions than the other agents. In the subsequent testing phase, the constructive agent performed 15% more actions than its counterparts.
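The growth rule described in the abstract (start with a single hidden neuron; insert new neurons or hidden layers in response to learning performance) can be sketched as a toy Python class. This is a minimal illustrative sketch, not the authors' implementation: the class name, the stall-detection trigger, the patience threshold, and the widen-then-deepen policy are all assumptions made here for illustration.

```python
import random

class ConstructiveNet:
    """Toy constructive network: starts with one hidden layer containing one
    neuron and grows (widens, then deepens) whenever performance stalls.
    All names and thresholds are illustrative assumptions."""

    def __init__(self, n_inputs, patience=3, max_width=6):
        self.n_inputs = n_inputs
        self.patience = patience      # stalled reports tolerated before growing
        self.max_width = max_width    # widen a layer up to this size, then deepen
        self.layers = [[self._new_neuron(n_inputs)]]  # one hidden layer, one neuron
        self.best = float("-inf")
        self.stall = 0

    def _new_neuron(self, fan_in):
        # In this sketch a "neuron" is just its incoming weight vector.
        return [random.uniform(-0.1, 0.1) for _ in range(fan_in)]

    def hidden_sizes(self):
        return [len(layer) for layer in self.layers]

    def report(self, score):
        """Feed back a performance measure (e.g., mean episode reward);
        grow the architecture when it stops improving."""
        if score > self.best + 1e-6:
            self.best, self.stall = score, 0
        else:
            self.stall += 1
            if self.stall >= self.patience:
                self._grow()
                self.stall = 0

    def _grow(self):
        last = self.layers[-1]
        if len(last) < self.max_width:
            # Insert a neuron into the last hidden layer.
            fan_in = self.n_inputs if len(self.layers) == 1 else len(self.layers[-2])
            last.append(self._new_neuron(fan_in))
        else:
            # Insert a new hidden layer fed by the previous one.
            self.layers.append([self._new_neuron(len(last))])
```

For example, with the default patience of 3, one improving reward report followed by three stalled reports triggers a single growth step, widening the hidden layer from one neuron to two.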
Pages: 25