Navigation method for autonomous mobile robots based on ROS and multi-robot improved Q-learning

Cited by: 0
Authors
Hamed, Oussama [1 ]
Hamlich, Mohamed [1 ]
Affiliations
[1] Hassan II Univ, CCPS Lab, ENSAM C, Casablanca, Morocco
Keywords
Multi-robot system; Reinforcement learning; Path planning; Robot operating system; Autonomous mobile robot;
DOI
10.1007/s13748-024-00320-5
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Path planning for multi-autonomous-mobile-robot systems has recently become an active research topic owing to its complexity and its wide use in fields such as modern industry, warfare, and logistics. Q-learning, a form of reinforcement learning, is widely used for autonomous mobile robot path planning thanks to its ability to learn in any environment without prior knowledge. To increase the convergence speed of Q-learning and adapt it to robotics and multi-robot systems, the Multi-Robot Improved Q-Learning (MRIQL) algorithm is proposed. The Artificial Potential Field (APF) algorithm is used to initialize the Q-values, and during learning a restricting mechanism prevents unnecessary actions while exploring. The improved Q-learning algorithm is adapted to multi-robot path planning by controlling and adjusting the robots' policies to generate an optimal, collision-free path for each robot. We introduce a simulation environment for mobile robots based on the Robot Operating System (ROS) and Gazebo. The experimental and simulation results demonstrate the validity and efficiency of the proposed algorithm.
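The abstract names two ingredients of MRIQL that can be illustrated concretely: seeding the Q-table from an APF-style potential so early episodes already point toward the goal, and masking out actions that would hit an obstacle or leave the workspace. The following is a minimal single-robot sketch under assumed conditions (a small grid world, Manhattan-distance potentials, illustrative gains `k_att`/`k_rep`); it is not the paper's implementation, only a hedged reconstruction of the general idea.

```python
import numpy as np

GRID = 5
GOAL = (4, 4)
OBSTACLES = {(2, 2), (2, 3)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def apf_potential(cell, k_att=1.0, k_rep=2.0):
    """Attractive potential toward the goal plus a repulsive term near obstacles."""
    pot = k_att * (abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1]))
    for obs in OBSTACLES:
        d_obs = abs(cell[0] - obs[0]) + abs(cell[1] - obs[1])
        if d_obs <= 1:
            pot += k_rep / (d_obs + 1e-3)
    return pot

def init_q_from_apf():
    """Seed each Q(s, a) with the negative potential of the successor cell."""
    q = np.zeros((GRID, GRID, len(ACTIONS)))
    for x in range(GRID):
        for y in range(GRID):
            for a, (dx, dy) in enumerate(ACTIONS):
                nx, ny = x + dx, y + dy
                if 0 <= nx < GRID and 0 <= ny < GRID:
                    q[x, y, a] = -apf_potential((nx, ny))
                else:
                    q[x, y, a] = -np.inf  # off-grid moves are never taken
    return q

def valid_actions(state):
    """Restriction mechanism: forbid off-grid and obstacle-entering moves."""
    acts = []
    for a, (dx, dy) in enumerate(ACTIONS):
        nxt = (state[0] + dx, state[1] + dy)
        if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID and nxt not in OBSTACLES:
            acts.append(a)
    return acts

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    q = init_q_from_apf()
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            acts = valid_actions(s)
            if rng.random() < eps:                       # explore (restricted)
                a = int(rng.choice(acts))
            else:                                        # exploit
                a = max(acts, key=lambda k: q[s[0], s[1], k])
            s2 = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
            r = 10.0 if s2 == GOAL else -1.0
            best_next = 0.0 if s2 == GOAL else max(
                q[s2[0], s2[1], k] for k in valid_actions(s2))
            q[s[0], s[1], a] += alpha * (r + gamma * best_next - q[s[0], s[1], a])
            s = s2
            if s == GOAL:
                break
    return q

def greedy_path(q, start=(0, 0), max_steps=50):
    """Follow the learned policy greedily from start to goal."""
    path, s = [start], start
    while s != GOAL and len(path) <= max_steps:
        a = max(valid_actions(s), key=lambda k: q[s[0], s[1], k])
        s = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
        path.append(s)
    return path
```

The APF initialization gives exploration a gradient toward the goal from the first episode, while the action mask removes wasted updates on moves that can never be part of a feasible path; the multi-robot policy-coordination step described in the abstract is not reproduced here.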
Pages: 9
Related papers (50 total)
  • [21] A modified Q-learning algorithm for multi-robot decision making
    Wang, Ying
    de Silva, Clarence W.
    [J]. PROCEEDINGS OF THE ASME INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION 2007, VOL 9, PTS A-C: MECHANICAL SYSTEMS AND CONTROL, 2008, : 1275 - 1281
  • [22] Q-Learning for autonomous vehicle navigation
    Gonzalez-Miranda, Oscar
    Miranda, Luis Antonio Lopez
    Ibarra-Zannatha, Juan Manuel
    [J]. 2023 XXV ROBOTICS MEXICAN CONGRESS, COMROB, 2023, : 138 - 142
  • [23] Based on A* and q-learning search and rescue robot navigation
    Pang, Tao
    Ruan, Xiaogang
    Wang, Ershen
    Fan, Ruiyuan
    [J]. Telkomnika - Indonesian Journal of Electrical Engineering, 2012, 10 (07): : 1889 - 1896
  • [24] Q-learning based method of adaptive path planning for mobile robot
    Li, Yibin
    Li, Caihong
    Zhang, Zijian
    [J]. 2006 IEEE INTERNATIONAL CONFERENCE ON INFORMATION ACQUISITION, VOLS 1 AND 2, CONFERENCE PROCEEDINGS, 2006, : 983 - 987
  • [25] Path planning for autonomous mobile robot using transfer learning-based Q-learning
    Wu, Shengshuai
    Hu, Jinwen
    Zhao, Chunhui
    Pan, Quan
    [J]. PROCEEDINGS OF 2020 3RD INTERNATIONAL CONFERENCE ON UNMANNED SYSTEMS (ICUS), 2020, : 88 - 93
  • [26] Topological Q-learning with internally guided exploration for mobile robot navigation
    Hafez, Muhammad Burhan
    Loo, Chu Kiong
    [J]. NEURAL COMPUTING & APPLICATIONS, 2015, 26 (08): : 1939 - 1954
  • [27] Reactive fuzzy controller design by Q-learning for mobile robot navigation
    Zhang, Wen-Zhi
    Lu, Tian-Sheng
    [J]. JOURNAL OF HARBIN INSTITUTE OF TECHNOLOGY (NEW SERIES), 2005, 12 (03): : 319 - 324
  • [30] A deterministic improved Q-learning for path planning of a mobile robot
    [J]. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, 2013, 43 (05): : 1141 - 1152