Modified Q-learning with distance metric and virtual target on path planning of mobile robot

Cited by: 31
Authors
Low, Ee Soong [1 ]
Ong, Pauline [1 ]
Low, Cheng Yee [1 ]
Omar, Rosli [2 ]
Affiliations
[1] Univ Tun Hussein Onn Malaysia UTHM, Fac Mech & Mfg Engn, Batu Pahat 86400, Johor, Malaysia
[2] Univ Tun Hussein Onn Malaysia UTHM, Fac Elect & Elect Engn, Batu Pahat 86400, Johor, Malaysia
Keywords
Moving target; Obstacle avoidance; Path planning; Q-learning; Reinforcement learning; Mobile robot; ALGORITHM; OPTIMIZATION;
DOI
10.1016/j.eswa.2022.117191
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Path planning is an essential element of mobile robot navigation. One popular path planner is Q-learning, a type of reinforcement learning that learns with little or no prior knowledge of the environment. Despite the successful implementations of Q-learning reported in numerous studies, its slow convergence, associated with the curse of dimensionality, may limit its performance in practice. To address this problem, this study introduces an Improved Q-learning (IQL) with three modifications. First, a distance metric is added to Q-learning to guide the agent towards the target. Second, the Q function of Q-learning is modified to overcome dead-ends more effectively. Lastly, the virtual target concept is introduced in Q-learning to bypass dead-ends. Experimental results across twenty types of navigation maps show that the proposed strategies accelerate the learning speed of IQL compared with conventional Q-learning. Moreover, performance comparison with seven well-known path planners indicates its efficiency in terms of path smoothness, time taken, shortest distance, and total distance used.
Pages: 40
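The abstract describes three modifications to Q-learning; the first, a distance metric that guides the agent towards the target, can be sketched as a reward-shaping term in tabular Q-learning on a grid map. The Manhattan-distance bonus, the reward values, and all hyperparameters below are illustrative assumptions for a minimal sketch, not the paper's exact formulation (the modified Q function and the virtual target concept are not reproduced here):

```python
import random

def manhattan(a, b):
    """Manhattan distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def improved_q_learning(grid, start, goal, episodes=300,
                        alpha=0.5, gamma=0.9, eps=0.2, beta=0.5):
    """Tabular Q-learning with a distance-metric shaping term.

    The shaping bonus (beta * reduction in Manhattan distance to the
    goal) is a hypothetical stand-in for the paper's distance metric.
    grid: 2D list, 0 = free cell, 1 = obstacle.
    """
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = {}

    def q(s, a):
        return Q.get((s, a), 0.0)

    for _ in range(episodes):
        s = start
        for _ in range(4 * rows * cols):  # step cap per episode
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda i: q(s, i))
            dr, dc = actions[a]
            nr, nc = s[0] + dr, s[1] + dc
            blocked = (not (0 <= nr < rows and 0 <= nc < cols)
                       or grid[nr][nc] == 1)
            s2 = s if blocked else (nr, nc)
            if s2 == goal:
                r = 100.0          # reaching the target
            elif blocked:
                r = -10.0          # hitting a wall or obstacle
            else:
                # distance-metric shaping: reward moving closer to the goal
                r = -1.0 + beta * (manhattan(s, goal) - manhattan(s2, goal))
            # standard Q-learning update
            Q[(s, a)] = q(s, a) + alpha * (
                r + gamma * max(q(s2, i) for i in range(4)) - q(s, a))
            s = s2
            if s == goal:
                break
    return Q
```

With `beta = 0` this reduces to plain Q-learning; a positive `beta` biases early exploration towards the target, which is the intuition behind the faster convergence the abstract claims.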