Comparison of Deep Q-Learning, Q-Learning and SARSA Reinforced Learning for Robot Local Navigation

Cited by: 1
Authors
Anas, Hafiq [1]
Ong, Wee Hong [1]
Malik, Owais Ahmed [1]
Affiliations
[1] Univ Brunei Darussalam, Sch Digital Sci, Jalan Tungku Link, Gadong, Brunei
Keywords
Deep reinforcement learning; Mobile robot navigation; Obstacle avoidance
DOI
10.1007/978-3-030-97672-9_40
CLC Classification
TP18 [theory of artificial intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper presents a performance comparison of mobile robot obstacle avoidance using Deep Reinforcement Learning (DRL) and two classical Reinforcement Learning (RL) methods. The DRL-based method used the Deep Q-Learning (DQN) algorithm, whereas the RL-based methods used the Q-Learning and Sarsa algorithms. In our experiments, we used an extended OpenAI Gym toolkit to compare the performance of the DQN, Q-Learning, and Sarsa algorithms in both simulated and real-world environments. A Turtlebot3 Burger was used as the mobile robot hardware to evaluate the performance of the RL models in the real-world environment. Average reward, episode steps, and navigation success rate were used to compare the navigation ability of the RL agents. Based on the simulated and real-world results, DQN performed significantly better than both Q-Learning and Sarsa, achieving 100% success rates in both the simulated and real-world tests.
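For readers unfamiliar with the algorithms compared above: tabular Q-Learning and Sarsa differ only in their bootstrap target (Q-Learning bootstraps off-policy from the greedy action, Sarsa on-policy from the action actually taken), while DQN keeps the Q-Learning target but estimates Q with a neural network instead of a table. The following is a minimal Python sketch of the two tabular updates, assuming a hypothetical 16-state, 4-action task and generic hyperparameters; none of these values or names come from the paper.

import numpy as np

# Hypothetical toy sizes; not the paper's state/action discretization.
n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # assumed, illustrative values
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def epsilon_greedy(Q, s):
    # Behaviour policy shared by both algorithms.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_learning_update(Q, s, a, r, s_next):
    # Off-policy: bootstrap from the greedy (max) action in s_next.
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action actually taken in s_next.
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])

# Example: one hypothetical transition (s=0 -> s'=1, reward -1).
s, a = 0, epsilon_greedy(Q, 0)
q_learning_update(Q, s, a, -1.0, 1)
sarsa_update(Q, s, a, -1.0, 1, epsilon_greedy(Q, 1))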
Pages
443-454 (12 pages)