Comparison of Deep Q-Learning, Q-Learning and SARSA Reinforced Learning for Robot Local Navigation

Cited by: 1
Authors
Anas, Hafiq [1 ]
Ong, Wee Hong [1 ]
Malik, Owais Ahmed [1 ]
Affiliations
[1] Univ Brunei Darussalam, Sch Digital Sci, Jalan Tungku Link, Gadong, Brunei
Keywords
Deep reinforcement learning; Mobile robot navigation; Obstacle avoidance;
DOI
10.1007/978-3-030-97672-9_40
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a performance comparison of mobile robot obstacle avoidance between a Deep Reinforcement Learning (DRL) method and two classical Reinforcement Learning (RL) methods. For the DRL-based method, the Deep Q-Learning (DQN) algorithm was used, whereas for the RL-based methods, the Q-Learning and SARSA algorithms were used. In our experiments, we used the extended OpenAI Gym Toolkit to compare the performance of the DQN, Q-Learning, and SARSA algorithms in both simulated and real-world environments. A Turtlebot3 Burger was used as the mobile robot hardware to evaluate the performance of the RL models in the real-world environment. The average reward, episode steps, and rate of successful navigation were used to compare the navigation ability of the RL agents. Based on the simulated and real-world results, DQN performed significantly better than both Q-Learning and SARSA, achieving 100% success rates during the simulated and real-world tests.
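The key distinction between the two classical methods compared in the abstract is how each bootstraps its value target: Q-Learning is off-policy (it bootstraps from the greedy next action), while SARSA is on-policy (it bootstraps from the action the policy actually takes next). A minimal sketch of the two tabular update rules, with assumed hyperparameter values not taken from the paper:

```python
# Illustrative sketch (not code from the paper): the core update rules
# that distinguish Q-Learning from SARSA.
ALPHA = 0.1   # assumed learning rate
GAMMA = 0.99  # assumed discount factor

def q_learning_update(Q, s, a, r, s_next, actions):
    """Off-policy: the target uses the max over next-state actions."""
    target = r + GAMMA * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next):
    """On-policy: the target uses the next action actually chosen."""
    target = r + GAMMA * Q[(s_next, a_next)]
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

DQN replaces the table `Q[(s, a)]` with a neural network trained on the same off-policy target, which is what lets it scale to the continuous laser-scan observations of the navigation task.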
Pages: 443-454 (12 pages)