Swarm Reinforcement Learning Method Based on Hierarchical Q-Learning

Cited by: 0
Authors
Kuroe, Yasuaki [1 ]
Takeuchi, Kenya [1 ]
Maeda, Yutaka [1 ]
Affiliations
[1] Kansai Univ, Fac Engn Sci, Suita, Osaka, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
reinforcement learning method; partially observed Markov decision process; hierarchical Q-learning; swarm intelligence;
DOI
10.1109/SSCI50451.2021.9659877
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent decades, the reinforcement learning method has attracted a great deal of attention and many studies have been conducted. However, this method is basically a trial-and-error scheme, and acquiring optimal strategies requires considerable computational time. Furthermore, optimal strategies may not be obtained at all for large, complicated problems with many states. To resolve these problems we have proposed the swarm reinforcement learning method, inspired by multi-point search optimization methods. The swarm reinforcement learning method has been studied extensively and its effectiveness has been confirmed on several problems, especially Markov decision processes in which the agents can fully observe the states of the environment. In many real-world problems, however, the agents cannot fully observe the environment; such problems are usually partially observable Markov decision processes (POMDPs). The purpose of this paper is to develop a swarm reinforcement learning method that can deal with POMDPs. We propose a swarm reinforcement learning method based on HQ-learning, a hierarchical extension of Q-learning. Experiments show that the proposed method can handle POMDPs and achieves higher performance than the original HQ-learning.
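The core swarm idea sketched in the abstract, several independent Q-learning agents that periodically exchange information, as in multi-point search, can be illustrated roughly as follows. This is a minimal sketch under assumptions of my own: a fully observable one-dimensional corridor task, tabular Q-learning, and a "copy the best agent" exchange rule. It is not the paper's actual method, which builds on HQ-learning for POMDPs; the environment, exchange rule, and all hyperparameters here are illustrative.

```python
import random

# Illustrative corridor MDP: start at cell 0, goal at the rightmost cell.
N_STATES = 8
ACTIONS = [-1, 1]            # move left / move right
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1

def step(s, a):
    """One environment transition: small step cost, +1 on reaching the goal."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else -0.01), done

def run_episode(q, rng, max_steps=50):
    """One episode of epsilon-greedy tabular Q-learning on the agent's own table."""
    s, total = 0, 0.0
    for _ in range(max_steps):
        if rng.random() < EPS:
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Standard Q-learning update on this agent's private Q-table.
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s, total = s2, total + r
        if done:
            break
    return total

def swarm_q_learning(n_agents=4, episodes=200, exchange_every=10, seed=0):
    """Several agents learn independently; every few episodes all agents
    copy the Q-table of the agent with the best recent return (one simple
    multi-point-search-style information exchange rule)."""
    rng = random.Random(seed)
    qs = [[[0.0, 0.0] for _ in range(N_STATES)] for _ in range(n_agents)]
    for ep in range(1, episodes + 1):
        returns = [run_episode(q, rng) for q in qs]
        if ep % exchange_every == 0:                      # exchange phase
            best = qs[returns.index(max(returns))]
            qs = [[row[:] for row in best] for _ in qs]   # everyone copies the best
    return qs[0]

q = swarm_q_learning()
# Greedy policy learned by the swarm: action index 1 means "move right".
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(N_STATES)]
```

After training, the greedy policy heads right toward the goal from every non-terminal state. The exchange phase is what distinguishes swarm RL from running independent learners: agents share progress periodically instead of learning in isolation.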
Pages: 8