Knowledge-based Exploration for Reinforcement Learning in Self-Organizing Neural Networks

Cited: 10
Authors
Teng, Teck-Hou [1 ]
Tan, Ah-Hwee [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Engn, Singapore 639798, Singapore
Keywords
Reinforcement Learning; Self-Organizing Neural Network; Directed Exploration; Rule-Based System; Architecture; Pursuit; Evasion
DOI
10.1109/WI-IAT.2012.154
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Exploration is necessary during reinforcement learning to discover new solutions in a given problem space. Most reinforcement learning systems, however, adopt a simple strategy of randomly selecting an action from among all available actions. This paper proposes a novel exploration strategy, known as Knowledge-based Exploration, for guiding the exploration of a family of self-organizing neural networks in reinforcement learning. Specifically, exploration is directed towards unexplored and favorable action choices while steering away from negative action choices that are likely to fail. This is achieved by using the agent's learned knowledge to identify prior action choices leading to low Q-values in similar situations. Consequently, the agent is expected to learn the right solutions in a shorter time, improving overall learning efficiency. Using a Pursuit-Evasion problem domain, we evaluate the efficacy of the knowledge-based exploration strategy in terms of task performance, rate of learning, and model complexity. Comparisons with random exploration and three other heuristic-based directed exploration strategies show that Knowledge-based Exploration is significantly more effective and robust for reinforcement learning in real time.
Pages: 332 - 339
Number of Pages: 8
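
The abstract describes the strategy only at a high level. The following minimal Python sketch illustrates one plausible reading of it: exploratory action selection that consults previously learned Q-values to rule out action choices known to perform poorly in similar situations, while favoring unexplored and favorable ones. The helper `estimate_q`, the `low_q_threshold` parameter, and the fallback behavior are assumptions made for illustration and are not taken from the paper.

```python
import random

def knowledge_based_explore(state, actions, estimate_q, low_q_threshold=0.2):
    """Select an exploratory action while avoiding choices known to fail.

    estimate_q(state, action) -> float or None
        Returns the Q-value learned for a similar (state, action) pair,
        or None if no matching knowledge has been learned yet.
        (Hypothetical interface, not the paper's actual API.)
    """
    # Actions the agent has no learned knowledge about yet.
    unexplored = [a for a in actions if estimate_q(state, a) is None]

    # Actions whose learned Q-values in similar situations are acceptable.
    favorable = [a for a in actions
                 if (q := estimate_q(state, a)) is not None
                 and q >= low_q_threshold]

    # Prefer unexplored and favorable actions; fall back to uniform random
    # exploration only when every known action looks unpromising.
    candidates = unexplored + favorable
    return random.choice(candidates if candidates else actions)
```

In this reading, the exploration step stays random but is restricted to a filtered candidate set, which is what distinguishes it from purely random action selection.
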
Related Papers
50 records in total
  • [1] Probabilistic Guided Exploration for Reinforcement Learning in Self-Organizing Neural Networks
    Wang, Peng
    Zhou, Weigui Jair
    Wang, Di
    Tan, Ah-Hwee
    2018 IEEE INTERNATIONAL CONFERENCE ON AGENTS (ICA), 2018, : 109 - 112
  • [2] Self-Organizing Neural Networks Integrating Domain Knowledge and Reinforcement Learning
    Teng, Teck-Hou
    Tan, Ah-Hwee
    Zurada, Jacek M.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, 26 (05) : 889 - 902
  • [3] Self-organizing neural architecture for reinforcement learning
    Tan, Ah-Hwee
    ADVANCES IN NEURAL NETWORKS - ISNN 2006, PT 1, 2006, 3971 : 470 - 475
  • [4] Direct Code Access in Self-Organizing Neural Networks for Reinforcement Learning
    Tan, Ah-Hwee
    20TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2007, : 1071 - 1076
  • [5] Fast Reinforcement Learning under Uncertainties with Self-Organizing Neural Networks
    Teng, Teck-Hou
    Tan, Ah-Hwee
    2015 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY (WI-IAT), VOL 2, 2015, : 51 - 58
  • [6] Knowledge-based recurrent neural networks in reinforcement learning
    Le, Tien Dung
    Komeda, Takashi
    Takagi, Motoki
    PROCEEDINGS OF THE 11TH IASTED INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, 2007, : 169 - 174
  • [7] Projection learning for self-organizing neural networks
    Potlapalli, H
    Luo, RC
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 1996, 43 (04) : 485 - 491
  • [8] Self-organizing maps for storage and transfer of knowledge in reinforcement learning
    Karimpanal, Thommen George
    Bouffanais, Roland
    ADAPTIVE BEHAVIOR, 2019, 27 (02) : 111 - 126
  • [9] Self-Organizing Neural Models Integrating Rules and Reinforcement Learning
    Teng, Teck-Hou
    Tan, Zhong-Ming
    Tan, Ah-Hwee
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008, : 3771+
  • [10] Comparing Knowledge-Based Reinforcement Learning to Neural Networks in a Strategy Game
    Nechepurenko, Liudmyla
    Voss, Viktor
    Gritsenko, Vyacheslav
    HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, HAIS 2020, 2020, 12344 : 312 - 328