Multirobot Coverage Path Planning Based on Deep Q-Network in Unknown Environment

Cited by: 1
Authors
Li, Wenhao [1]
Zhao, Tao [1]
Dian, Songyi [1]
Affiliations
[1] Sichuan Univ, Coll Elect Engn, Chengdu 610065, Peoples R China
Keywords
ALGORITHM;
DOI
10.1155/2022/6825902
CLC Classification Number
TP24 [Robotics];
Subject Classification Codes
080202; 1405;
Abstract
To address the safety risks, high coverage repetition rates, and restrictive assumptions of multirobot coverage path planning (MCPP) in unknown environments, this paper adopts the Deep Q-Network (DQN) as part of its method, exploiting DQN's strong ability to approximate the optimal action-value function. A deduction method and several environment-handling techniques are then proposed to improve the decision-making stage. The deduction method hypothesizes a movement direction for each robot, accumulates the reward the robots would obtain under that hypothesis, and combines the result with the DQN output to determine the actual movement directions. Accordingly, the overall algorithm is divided into two parts: offline training and online decision-making. Online decision-making uses a sliding-view method and probability statistics to handle nonstandard map sizes and unknown environments, and the deduction method to improve coverage efficiency. Simulation results show that the proposed online method performs close to an offline algorithm that requires lengthy optimization, while also being more stable. The study thereby alleviates several performance shortcomings of existing MCPP methods in unknown environments.
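The record contains no code, so the following Python sketch is only an assumed illustration of the idea described in the abstract: scoring each robot's candidate moves by mixing DQN action values with a deduced one-step reward before committing to the actual movement directions. Every name here (QNet, simulate_reward, choose_actions, the grid encoding, and the 0.5 mixing weight) is hypothetical and not taken from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left on a grid

class QNet(nn.Module):
    """Small MLP mapping a flattened local view to one value per action (assumed architecture)."""
    def __init__(self, view_size: int, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(view_size, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def simulate_reward(grid, pos, action):
    """Assumed reward model: +1 for moving onto an uncovered free cell (0),
    -1 for leaving the map or hitting an obstacle (-1), 0 for re-covering a cell (1)."""
    r, c = pos[0] + action[0], pos[1] + action[1]
    if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]) or grid[r, c] == -1:
        return -1.0
    return 1.0 if grid[r, c] == 0 else 0.0

def choose_actions(qnet, grid, robot_positions, local_views, mix=0.5):
    """Score each candidate action as a mix of the DQN value and the deduced
    one-step reward, then pick the argmax for each robot independently."""
    chosen = []
    for pos, view in zip(robot_positions, local_views):
        with torch.no_grad():
            q = qnet(torch.as_tensor(view, dtype=torch.float32).flatten()).numpy()
        deduced = np.array([simulate_reward(grid, pos, a) for a in ACTIONS])
        chosen.append(ACTIONS[int(np.argmax(mix * q + (1 - mix) * deduced))])
    return chosen
```

The mixing step is the point of the sketch: the deduced reward biases the learned action values toward uncovered cells and away from obstacles at decision time, which is one plausible reading of how the deduction method complements the offline-trained DQN.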
Pages: 15
Related Papers (50 records in total)
  • [31] Seng, Dewen; Zhang, Jiaming; Shi, Xiaoying. Visual Analysis of Deep Q-network. KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2021, 15(3): 853-873.
  • [32] Han, Le; Zhang, Hui; An, Nan. A Continuous Space Path Planning Method for Unmanned Aerial Vehicle Based on Particle Swarm Optimization-Enhanced Deep Q-Network. DRONES, 2025, 9(2).
  • [33] Lv, Pingli; Wang, Xuesong; Cheng, Yuhu; Duan, Ziming. Stochastic Double Deep Q-Network. IEEE ACCESS, 2019, 7: 79446-79454.
  • [34] Oh, Sung Hyun; Kim, Jeong Gon. Indoor Positioning by Double Deep Q-Network in VLC-Based Empty Office Environment. APPLIED SCIENCES-BASEL, 2024, 14(9).
  • [35] Hu, Hui; Wang, Yuge; Tong, Wenjie; Zhao, Jiao; Gu, Yulei. Path Planning for Autonomous Vehicles in Unknown Dynamic Environment Based on Deep Reinforcement Learning. APPLIED SCIENCES-BASEL, 2023, 13(18).
  • [36] Wu, Zhiqiang; Yin, Yebo; Liu, Jie; Zhang, De; Chen, Jie; Jiang, Wei. A Novel Path Planning Approach for Mobile Robot in Radioactive Environment Based on Improved Deep Q Network Algorithm. SYMMETRY-BASEL, 2023, 15(11).
  • [37] Liang, Zhengping; Yang, Ruitai; Wang, Jigang; Liu, Ling; Ma, Xiaoliang; Zhu, Zexuan. Dynamic constrained evolutionary optimization based on deep Q-network. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249.
  • [38] Li, Xinqian; Yao, Jie; Ren, Jia; Wang, Liqiang. A New Feature Selection Algorithm Based on Deep Q-Network. 2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021: 7100-7105.
  • [39] Xu, Weifeng; Zhu, Xiang; Gao, Xiaori; Li, Xiaoyong; Cao, Jianping; Ren, Xiaoli; Shao, Chengcheng. Manipulation-Compliant Artificial Potential Field and Deep Q-Network: Large Ships Path Planning Based on Deep Reinforcement Learning and Artificial Potential Field. JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2024, 12(8).
  • [40] Ni, Kun; Yu, Danning; Liu, Yunlong. Attention-Based Deep Q-Network in Complex Systems. NEURAL INFORMATION PROCESSING (ICONIP 2019), PT IV, 2019, 1142: 323-332.