Topological Q-learning with internally guided exploration for mobile robot navigation

Cited by: 14
Authors
Hafez, Muhammad Burhan [1 ]
Loo, Chu Kiong [1 ]
Affiliation
[1] Univ Malaya, Fac Comp Sci & Informat Technol, Kuala Lumpur 50603, Malaysia
Source
NEURAL COMPUTING & APPLICATIONS | 2015, Vol. 26, No. 8
Keywords
Reinforcement learning; Q-learning; Convergence acceleration; Topological map; Guided exploration;
DOI
10.1007/s00521-015-1861-8
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Improving the learning convergence of reinforcement learning (RL) in mobile robot navigation has been the focus of many recent works, which have investigated different approaches to obtaining knowledge by effectively and efficiently exploring the robot's environment. In RL, this knowledge is crucial for reducing the large number of interactions required to update the value function and for eventually finding an optimal or nearly optimal policy for the agent. In this paper, we propose a topological Q-learning (TQ-learning) algorithm that exploits the topological ordering among the observed states of the environment in which the agent acts. The algorithm builds an incremental topological map of the environment using the Instantaneous Topological Map model, which we use both to accelerate value function updates and to provide a guided exploration strategy for the agent. We evaluate our algorithm against the original Q-learning and Influence Zone algorithms in static and dynamic environments.
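The abstract only outlines the mechanism, so the following is a minimal, hypothetical Python sketch of how tabular Q-learning can be coupled with an incrementally built topological map: observed transitions grow an adjacency structure (standing in for the Instantaneous Topological Map), each value backup is additionally propagated to topological neighbours to speed up convergence, and exploration is biased toward less-visited neighbouring nodes. The class name TopoQLearner, the neighbour-propagation factor, and the visit-count heuristic are illustrative assumptions, not the authors' actual TQ-learning or ITM update rules.

```python
# Minimal, hypothetical sketch of Q-learning coupled with a topological map.
# This is NOT the paper's exact TQ-learning/ITM formulation; names and the
# neighbour-propagation factor below are illustrative assumptions.
import random
from collections import defaultdict


class TopoQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.Q = defaultdict(float)      # Q[(node, action)] -> value estimate
        self.edges = defaultdict(set)    # topological adjacency between nodes
        self.visits = defaultdict(int)   # per-node visit counts

    def add_transition(self, s, s_next):
        """Grow the topological map from an observed transition
        (stand-in for the Instantaneous Topological Map updates)."""
        self.edges[s].add(s_next)
        self.edges[s_next].add(s)
        self.visits[s_next] += 1

    def choose_action(self, s):
        """Guided exploration: with probability epsilon, steer toward the
        least-visited topological neighbour; otherwise act greedily."""
        if random.random() < self.epsilon and self.edges[s]:
            target = min(self.edges[s], key=lambda n: self.visits[n])
            # Pick the action currently valued highest at the target node
            # (a placeholder heuristic for "moving toward" that node).
            return max(self.actions, key=lambda a: self.Q[(target, a)])
        return max(self.actions, key=lambda a: self.Q[(s, a)])

    def update(self, s, a, r, s_next):
        """Standard Q-learning backup, then a damped propagation of the
        same target to topological neighbours to accelerate convergence."""
        best_next = max(self.Q[(s_next, b)] for b in self.actions)
        td_target = r + self.gamma * best_next
        self.Q[(s, a)] += self.alpha * (td_target - self.Q[(s, a)])
        for n in self.edges[s]:
            self.Q[(n, a)] += 0.5 * self.alpha * (td_target - self.Q[(n, a)])
```

In this sketch the topological map only supplies adjacency and visit counts; the paper's ITM model additionally adapts its nodes and edges online, which is omitted here.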
Pages: 1939-1954
Page count: 16
Related Papers
50 items in total
  • [1] Topological Q-learning with internally guided exploration for mobile robot navigation
    Muhammad Burhan Hafez
    Chu Kiong Loo
    [J]. Neural Computing and Applications, 2015, 26 : 1939 - 1954
  • [2] Mobile Robot Navigation: Neural Q-Learning
    Yun, Soh Chin
    Parasuraman, S.
    Ganapathy, V.
    [J]. ADVANCES IN COMPUTING AND INFORMATION TECHNOLOGY, VOL 3, 2013, 178 : 259 - +
  • [3] Mobile robot navigation: neural Q-learning
    Parasuraman, S.
    Yun, Soh Chin
    [J]. INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2012, 44 (04) : 303 - 311
  • [4] Mobile robot Navigation Based on Q-Learning Technique
    Khriji, Lazhar
    Touati, Farid
    Benhmed, Kamel
    Al-Yahmedi, Amur
    [J]. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2011, 8 (01): : 45 - 51
  • [5] Mobile robot navigation using neural Q-learning
    Yang, GS
    Chen, EK
    An, CW
    [J]. PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2004, : 48 - 52
  • [6] Neural Q-Learning Based Mobile Robot Navigation
    Yun, Soh Chin
    Parasuraman, S.
    Ganapathy, V.
    Joe, Halim Kusuma
    [J]. MATERIALS SCIENCE AND INFORMATION TECHNOLOGY, PTS 1-8, 2012, 433-440 : 721 - +
  • [7] Autonomous Exploration for Mobile Robot using Q-learning
    Liu, Yang
    Liu, Huaping
    Wang, Bowen
    [J]. 2017 2ND INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM), 2017, : 614 - 619
  • [8] Application of Deep Q-Learning for Wheel Mobile Robot Navigation
    Mohanty, Prases K.
    Sah, Arun Kumar
    Kumar, Vikas
    Kundu, Shubhasri
    [J]. 2017 3RD INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND NETWORKS (CINE), 2017, : 88 - 93
  • [9] An improved Q-learning algorithm for an autonomous mobile robot navigation problem
    Muhammad, Jawad
    Bucak, Ihsan Omur
    [J]. 2013 INTERNATIONAL CONFERENCE ON TECHNOLOGICAL ADVANCES IN ELECTRICAL, ELECTRONICS AND COMPUTER ENGINEERING (TAEECE), 2013, : 239 - 243
  • [10] Reactive fuzzy controller design by Q-learning for mobile robot navigation
    张文志
    吕恬生
    [J]. Journal of Harbin Institute of Technology (New Series), 2005, (03) : 319 - 324