Improved Q-Learning Applied to Dynamic Obstacle Avoidance and Path Planning

Cited by: 4
Authors
Wang, Chunlei [1 ]
Yang, Xiao [2 ]
Li, He [3 ]
Affiliations
[1] China Univ Min & Technol, Sch Publ Adm, Xuzhou 221116, Jiangsu, Peoples R China
[2] Harbin Engn Univ, Sch Comp Sci & Technol, Harbin 150001, Heilongjiang, Peoples R China
[3] Beijing Jiaotong Univ, Dept Languages & Commun Studies, Beijing 100044, Haidian, Peoples R China
Source
IEEE ACCESS | 2022 / Vol. 10
Keywords
Path planning; Heuristic algorithms; Robots; Q-learning; Planning; Optimization; Genetic algorithms; Dynamic obstacle avoidance; sequence problems; reinforcement learning; Q-learning algorithm; NAVIGATION;
DOI
10.1109/ACCESS.2022.3203072
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Due to the complexity of interactive environments, dynamic obstacle avoidance path planning poses a significant challenge to agent mobility. Dynamic path planning is a complex multi-constraint combinatorial optimization problem, and some existing algorithms easily fall into local optima when solving it, which degrades their convergence speed and accuracy. Reinforcement learning has clear advantages in solving decision-sequence problems in complex environments, and Q-learning is one such reinforcement learning method. To improve the algorithm's value evaluation on practical problems, this paper introduces a priority weight into the Q-learning algorithm. The improved algorithm is compared with existing algorithms and applied to dynamic obstacle avoidance path planning. Experiments show that it markedly improves convergence speed, accuracy, and value evaluation, finding the shortest path of 16 units in 27 seconds.
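The abstract states that a priority weight is folded into the Q-learning value update but does not give its exact form. The sketch below illustrates that idea under stated assumptions: a grid world with one drifting obstacle and a hypothetical goal-distance priority term that scales the learning step. The names priority_weight, step, and train are illustrative, not taken from the paper.

```python
import random

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up


def priority_weight(state, goal):
    # Hypothetical priority: states closer to the goal get a larger update weight.
    dist = abs(state[0] - goal[0]) + abs(state[1] - goal[1])
    return 1.0 / (1.0 + dist)


def step(state, action, obstacles, size):
    # Move on the grid; hitting an obstacle or the boundary keeps the agent in place.
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in obstacles:
        return state, -10.0
    return nxt, -1.0


def train(goal, size=10, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = {}
    for _ in range(episodes):
        state = (0, 0)
        obstacles = {(size // 2, size // 2)}  # single obstacle that drifts each step
        for _ in range(200):
            # Epsilon-greedy action selection over the tabular Q-values.
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
            nxt, reward = step(state, ACTIONS[a], obstacles, size)
            if nxt == goal:
                reward = 100.0
            best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            td_error = reward + gamma * best_next - Q.get((state, a), 0.0)
            # Assumed form: the priority weight scales the learning step for this state.
            Q[(state, a)] = Q.get((state, a), 0.0) + alpha * priority_weight(state, goal) * td_error
            state = nxt
            if state == goal:
                break
            # The obstacle drifts to mimic a dynamic environment.
            obstacles = {((ox + random.choice((-1, 0, 1))) % size, oy) for ox, oy in obstacles}
    return Q


if __name__ == "__main__":
    q_table = train(goal=(9, 9))
    print(f"{len(q_table)} state-action values learned")
```

In the paper the weight is meant to sharpen value evaluation; here it simply rescales the temporal-difference step per state, which is only one plausible reading of "priority weight".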
Pages: 92879 - 92888
Number of pages: 10
Related Papers
50 items in total
  • [1] A dynamic reward-enhanced Q-learning approach for efficient path planning and obstacle avoidance in mobile robotics
    Gharbi, Atef
    [J]. APPLIED COMPUTING AND INFORMATICS, 2024,
  • [2] Dynamic Path Planning of a Mobile Robot with Improved Q-Learning algorithm
    Li, Siding
    Xu, Xin
    Zuo, Lei
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, 2015, : 409 - 414
  • [3] Improved DQN for Dynamic Obstacle Avoidance and Ship Path Planning
    Yang, Xiao
    Han, Qilong
    [J]. ALGORITHMS, 2023, 16 (05)
  • [4] Application of Improved Q-Learning Algorithm in Dynamic Path Planning for Aircraft at Airports
    Xiang, Zheng
    Sun, Heyang
    Zhang, Jiahao
    [J]. IEEE ACCESS, 2023, 11 : 107892 - 107905
  • [5] Dynamic Obstacle Avoidance Path Planning
    Su, Shun-Feng
    Chen, Ming-Chang
    Li, Chung-Ying
    Wang, Wei-Yen
    Wang, Wen-June
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON SYSTEM SCIENCE AND ENGINEERING (ICSSE), 2014, : 40 - 43
  • [6] Dynamic Obstacle Avoidance and Path Planning through Reinforcement Learning
    Almazrouei, Khawla
    Kamel, Ibrahim
    Rabie, Tamer
    [J]. APPLIED SCIENCES-BASEL, 2023, 13 (14):
  • [7] Q-learning-based unmanned aerial vehicle path planning with dynamic obstacle avoidance
    Sonny, Amala
    Yeduri, Sreenivasa Reddy
    Cenkeramaddi, Linga Reddy
    [J]. APPLIED SOFT COMPUTING, 2023, 147
  • [8] Dynamic obstacle avoidance path planning of UAV Based on improved APF
    Li Keyu
    Lu Yonggen
    Zhang Yanchi
    [J]. 2020 5TH INTERNATIONAL CONFERENCE ON COMMUNICATION, IMAGE AND SIGNAL PROCESSING (CCISP 2020), 2020, : 159 - 163
  • [9] ETQ-learning: an improved Q-learning algorithm for path planning
    Wang, Huanwei
    Jing, Jing
    Wang, Qianlv
    He, Hongqi
    Qi, Xuyan
    Lou, Rui
    [J]. INTELLIGENT SERVICE ROBOTICS, 2024, 17 (04) : 915 - 929
  • [10] A deterministic improved Q-learning for path planning of a mobile robot
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2013, 43 (05) : 1141 - 1153