Multirobot Coverage Path Planning Based on Deep Q-Network in Unknown Environment

Cited by: 1
Authors
Li, Wenhao [1 ]
Zhao, Tao [1 ]
Dian, Songyi [1 ]
Affiliations
[1] Sichuan Univ, Coll Elect Engn, Chengdu 610065, Peoples R China
Keywords
ALGORITHM;
DOI
10.1155/2022/6825902
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202; 1405
Abstract
To address the problems of safety, high repetition rate, and numerous restrictions in multirobot coverage path planning (MCPP) in an unknown environment, this paper adopts Deep Q-Network (DQN) as part of its method, owing to DQN's powerful ability to approximate the optimal action-value function. A deduction method and several environment-handling methods are then proposed to improve performance in the decision-making stage. The deduction method hypothesizes a movement direction for each robot, accumulates the reward the robots would obtain by moving that way, and then determines the actual movement directions in combination with DQN. Accordingly, the whole algorithm is divided into two parts: offline training and online decision-making. Online decision-making relies on a sliding-view method and probability statistics to handle nonstandard-size and unknown environments, and on the deduction method to improve coverage efficiency. Simulation results show that the proposed online method performs close to the offline algorithm, which requires lengthy optimization, while also being more stable. This study thus ameliorates several performance defects of current MCPP methods in unknown environments.
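The deduction step described in the abstract can be illustrated with a minimal sketch. All names, reward values, and the blending weight below are illustrative assumptions, not the paper's actual implementation: each candidate direction is scored by the coverage reward a hypothetical move would earn (new cell, repeated cell, or invalid), and that deduced score is combined with the DQN's Q-value estimate to pick the actual movement direction.

```python
import numpy as np

# Hypothetical grid encoding: 0 = uncovered cell, 1 = covered cell, -1 = obstacle.
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def deduction_scores(grid, pos):
    """Score each candidate move by the immediate coverage reward it would earn."""
    scores = {}
    rows, cols = grid.shape
    for name, (dr, dc) in ACTIONS.items():
        r, c = pos[0] + dr, pos[1] + dc
        if not (0 <= r < rows and 0 <= c < cols) or grid[r, c] == -1:
            scores[name] = -np.inf   # off-grid or obstacle: invalid move
        elif grid[r, c] == 0:
            scores[name] = 1.0       # covers a new cell
        else:
            scores[name] = -0.5      # repeated coverage penalty
    return scores

def choose_action(grid, pos, q_values, weight=0.5):
    """Blend deduced rewards with DQN Q-value estimates to pick a direction."""
    deduced = deduction_scores(grid, pos)
    combined = {a: deduced[a] + weight * q_values.get(a, 0.0) for a in ACTIONS}
    return max(combined, key=combined.get)
```

In a full system the `q_values` would come from the trained network's forward pass on the robot's local (sliding-view) observation; here a plain dictionary stands in for that output.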
Pages: 15