Path planning of UAV using guided enhancement Q-learning algorithm

Citations: 0
Authors
Zhou, Bin [1 ]
Guo, Yan [1 ]
Li, Ning [1 ]
Zhong, Xijian [1 ]
Affiliations
[1] College of Communications Engineering, Army Engineering University of PLA, Nanjing 210007, China
Keywords
Motion planning
DOI
10.7527/S1000-6893.2021.25109
Abstract
With the increasing application of Unmanned Aerial Vehicle (UAV) technology, the energy consumption and computing capacity of UAVs face bottlenecks, so UAV path planning is becoming increasingly important. In many cases, the UAV cannot obtain the exact location of the target point or environmental information in advance, making it difficult to plan an effective flight path. To solve this problem, this paper proposes a path planning method for UAVs using a guided enhancement Q-learning algorithm. The method uses Received Signal Strength (RSS) to define the reward value and continuously optimizes the path with the Q-learning algorithm. A principle of guided reinforcement is proposed to accelerate the convergence of the Q-learning algorithm. Simulation results show that the proposed method can realize autonomous navigation and fast path planning for UAVs. Compared with the traditional algorithm, it greatly reduces the number of iterations and obtains a shorter planned path. © 2021, Beihang University Aerospace Knowledge Press. All rights reserved.
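The core idea in the abstract — Q-learning where the reward is defined from received signal strength so the UAV can navigate without knowing the target location — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the grid size, reward shape, and hyperparameters are assumptions, and since the paper's "guided reinforcement" rule is not detailed in this excerpt, plain epsilon-greedy Q-learning is used in its place.

```python
import random

GRID = 10                      # 10x10 grid world (assumed size)
TARGET = (9, 9)                # target location; the agent never reads it directly
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def rss_reward(state):
    # Proxy for RSS: signal strength decays with distance to the target,
    # so the reward rises as the UAV gets closer. The agent only sees this
    # scalar, never the target coordinates themselves.
    d = abs(state[0] - TARGET[0]) + abs(state[1] - TARGET[1])
    return -d + (100 if state == TARGET else 0)

def step(state, a):
    # Apply an action, clipping at the grid boundary.
    x = min(max(state[0] + ACTIONS[a][0], 0), GRID - 1)
    y = min(max(state[1] + ACTIONS[a][1], 0), GRID - 1)
    return (x, y)

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)
    Q = {}                     # sparse Q-table: (state, action) -> value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(200):   # cap steps per episode
            if random.random() < eps:          # explore
                a = random.randrange(len(ACTIONS))
            else:                              # exploit current estimate
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            nxt = step(s, a)
            r = rss_reward(nxt)
            best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q = Q.get((s, a), 0.0)
            # Standard Q-learning temporal-difference update.
            Q[(s, a)] = q + alpha * (r + gamma * best_next - q)
            s = nxt
            if s == TARGET:
                break
    return Q

def greedy_path(Q, max_steps=100):
    # Roll out the learned greedy policy to extract the planned path.
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        if s == TARGET:
            break
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        s = step(s, a)
        path.append(s)
    return path
```

Because the RSS-shaped reward already points toward the target, the agent converges in far fewer episodes than with a sparse goal-only reward — the same effect the paper attributes to its guided reinforcement principle.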
Related Papers (50 total)
  • [31] Optimal path planning approach based on Q-learning algorithm for mobile robots
    Maoudj, Abderraouf
    Hentout, Abdelfetah
    [J]. APPLIED SOFT COMPUTING, 2020, 97
  • [32] Biologically Inspired Complete Coverage Path Planning Algorithm Based on Q-Learning
    Tan, Xiangquan
    Han, Linhui
    Gong, Hao
    Wu, Qingwen
    [J]. SENSORS, 2023, 23 (10)
  • [33] Application of Improved Q-Learning Algorithm in Dynamic Path Planning for Aircraft at Airports
    Xiang, Zheng
    Sun, Heyang
    Zhang, Jiahao
    [J]. IEEE ACCESS, 2023, 11 : 107892 - 107905
  • [34] Cooperative Path Planning for Single Leader Using Q-learning Method
    Zhang, Lichuan
    Wu, Dongwei
    Ren, Ranzhen
    Xing, Runfa
    [J]. GLOBAL OCEANS 2020: SINGAPORE - U.S. GULF COAST, 2020,
  • [35] Path Planning Using Wasserstein Distributionally Robust Deep Q-learning
    Alpturk, Cem
    Renganathan, Venkatraman
    [J]. 2023 EUROPEAN CONTROL CONFERENCE, ECC, 2023,
  • [36] An adaptive Q-learning based particle swarm optimization for multi-UAV path planning
    Li Tan
    Hongtao Zhang
    Yuzhao Liu
    Tianli Yuan
    Xujie Jiang
    Ziliang Shang
    [J]. Soft Computing, 2024, 28 (13-14) : 7931 - 7946
  • [37] The Experience-Memory Q-Learning Algorithm for Robot Path Planning in Unknown Environment
    Zhao, Meng
    Lu, Hui
    Yang, Siyi
    Guo, Fengjuan
    [J]. IEEE ACCESS, 2020, 8 : 47824 - 47844
  • [38] A modified Q-learning algorithm for robot path planning in a digital twin assembly system
    Guo, Xiaowei
    Peng, Gongzhuang
    Meng, Yingying
    [J]. INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2022, 119 (5-6): 3951 - 3961
  • [39] Optimal path planning method based on epsilon-greedy Q-learning algorithm
    Bulut, Vahide
    [J]. JOURNAL OF THE BRAZILIAN SOCIETY OF MECHANICAL SCIENCES AND ENGINEERING, 2022, 44 (03)