Deep hierarchical reinforcement learning based formation planning for multiple unmanned surface vehicles with experimental results

Cited by: 11
Authors
Wei, Xiangwei [1 ]
Wang, Hao [1 ]
Tang, Yixuan [1 ]
Affiliations
[1] Northeastern Univ, Fac Robot Sci & Engn, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep reinforcement learning; Hierarchical reinforcement learning; Artificial potential field; Formation control; Unmanned surface vehicles; CONTROLLER;
DOI
10.1016/j.oceaneng.2023.115577
CLC classification
U6 [Waterway transportation]; P75 [Ocean engineering];
Discipline codes
0814 ; 081505 ; 0824 ; 082401 ;
Abstract
In this paper, a novel multi-USV formation path planning algorithm is proposed based on deep reinforcement learning. First, a goal-based hierarchical reinforcement learning algorithm is designed to improve training speed and resolve planning conflicts within the formation. Second, an improved artificial potential field algorithm is incorporated into the training process to obtain the optimal path planning and obstacle avoidance learning scheme for multiple USVs in a known perceptual environment. Finally, a formation geometry model is established to describe the physical relationships among the USVs, and a composite reward function is proposed to guide the training. Numerous simulation tests are conducted, and the effectiveness of the proposed algorithm is further validated on the NEU-MSV01 experimental platform in combination with parameterized line-of-sight (LOS) guidance.
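The full text is not part of this record, but the abstract builds on the classical artificial potential field (APF) idea: an attractive force pulls a vehicle toward its goal while repulsive forces push it away from nearby obstacles. A minimal sketch of that classical formulation follows; the function name, gains (`k_att`, `k_rep`) and influence radius (`d0`) are illustrative assumptions, not the paper's improved APF design.

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Classical APF step (illustrative, not the paper's improved variant):
    attractive pull toward the goal plus repulsive pushes from obstacles
    within the influence radius d0."""
    # Attractive component: proportional to the vector toward the goal
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 0 < d < d0:
            # Repulsion vanishes at d0 and grows sharply near the obstacle
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * (pos[0] - ox) / d  # unit vector away from obstacle
            fy += mag * (pos[1] - oy) / d
    return fx, fy
```

With no obstacles in range the force reduces to the pure attractive term, so the planned step heads straight for the goal; a close obstacle dominates and deflects the step away.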
Pages: 9
Related papers
50 records
  • [41] Formation control scheme with reinforcement learning strategy for a group of multiple surface vehicles
    Nguyen, Khai
    Dang, Van Trong
    Pham, Dinh Duong
    Dao, Phuong Nam
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2024, 34 (03) : 2252 - 2279
  • [42] Cooperatively pursuing a target unmanned aerial vehicle by multiple unmanned aerial vehicles based on multiagent reinforcement learning
    Wang, X.
    Xuan, S.
    Ke, L.
    ADVANCED CONTROL FOR APPLICATIONS: ENGINEERING AND INDUSTRIAL SYSTEMS, 2020, 2 (02)
  • [43] Deep Reinforcement Learning Based Computation Offloading in Heterogeneous MEC Assisted by Ground Vehicles and Unmanned Aerial Vehicles
    He, Hang
    Ren, Tao
    Cui, Meng
    Liu, Dong
    Niu, Jianwei
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, PT III, 2022, 13473 : 481 - 494
  • [44] Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning
    You, Changxi
    Lu, Jianbo
    Filev, Dimitar
    Tsiotras, Panagiotis
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 114 : 1 - 18
  • [45] Cooperative Search Path Planning for Multiple Unmanned Surface Vehicles
    Zhao, Pengcheng
    Li, Jinming
    Mao, Zhaoyong
    Ding, Wenjun
    PROCEEDINGS OF 2022 INTERNATIONAL CONFERENCE ON AUTONOMOUS UNMANNED SYSTEMS, ICAUS 2022, 2023, 1010 : 3434 - 3445
  • [46] A Task Offloading Algorithm With Cloud Edge Jointly Load Balance Optimization Based on Deep Reinforcement Learning for Unmanned Surface Vehicles
    Yan, Linjie
    Chen, Haiming
    Tu, Youpeng
    Zhou, Xinyan
    IEEE ACCESS, 2022, 10 : 16566 - 16576
  • [47] Towards Using Reinforcement Learning for Autonomous Docking of Unmanned Surface Vehicles
    Holen, Martin
    Ruud, Else-Line Malene
    Warakagoda, Narada Dilp
    Goodwin, Morten
    Engelstad, Paal
    Knausgard, Kristian Muri
    ENGINEERING APPLICATIONS OF NEURAL NETWORKS, EAAAI/EANN 2022, 2022, 1600 : 461 - 474
  • [48] Survey of Deep Reinforcement Learning for Motion Planning of Autonomous Vehicles
    Aradi, Szilard
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (02) : 740 - 759
  • [49] An Algorithm of Complete Coverage Path Planning for Unmanned Surface Vehicle Based on Reinforcement Learning
    Xing, Bowen
    Wang, Xiao
    Yang, Liu
    Liu, Zhenchong
    Wu, Qingyun
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (03)
  • [50] Adaptive Formation Motion Planning and Control of Autonomous Underwater Vehicles Using Deep Reinforcement Learning
    Hadi, Behnaz
    Khosravi, Alireza
    Sarhadi, Pouria
    IEEE JOURNAL OF OCEANIC ENGINEERING, 2024, 49 (01) : 311 - 328