Navigation method for autonomous mobile robots based on ROS and multi-robot improved Q-learning

Cited by: 0
Authors
Hamed, Oussama [1 ]
Hamlich, Mohamed [1 ]
Affiliations
[1] Hassan II Univ, CCPS Lab, ENSAM C, Casablanca, Morocco
Keywords
Multi-robot system; Reinforcement learning; Path planning; Robot operating system; Autonomous mobile robot
DOI
10.1007/s13748-024-00320-5
CLC number
TP18 (Artificial intelligence theory)
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Path planning for multi-autonomous mobile robot systems has recently become an active research topic, owing to its complexity and its wide use in fields such as modern industry, battlefield operations, and logistics. The Q-learning algorithm, a form of reinforcement learning, is widely used for autonomous mobile robot path planning because it can learn in any environment without prior knowledge. To increase the convergence speed of Q-learning and adapt it to robotics and multi-robot systems, the Multi-Robot Improved Q-Learning (MRIQL) algorithm is proposed. The Artificial Potential Field (APF) algorithm is used to initialize the Q-learning values. During learning, a restricting mechanism prevents unnecessary actions while exploring. This improved Q-learning algorithm is adapted to multi-robot path planning by controlling and adjusting the robots' policies to generate an optimal, collision-free path for each robot. We introduce a simulation environment for mobile robots based on the Robot Operating System (ROS) and Gazebo. The experimental and simulation results demonstrate the validity and efficiency of the proposed algorithm.
Pages: 9