Rule-Based Reinforcement Learning for Efficient Robot Navigation With Space Reduction

Cited by: 19
Authors
Zhu, Yuanyang [1 ]
Wang, Zhi [1 ]
Chen, Chunlin [1 ,2 ]
Dong, Daoyi [3 ]
Affiliations
[1] Nanjing Univ, Dept Control & Syst Engn, Sch Management & Engn, Nanjing 210093, Peoples R China
[2] Synergist Innovat Ctr Jiangsu Modern Agr Equipmen, Zhenjiang 212013, Jiangsu, Peoples R China
[3] Univ New South Wales, Sch Engn & Informat Technol, Canberra, ACT 2600, Australia
Funding
National Natural Science Foundation of China
Keywords
Navigation; Robots; Trajectory; Space exploration; Mobile robots; Task analysis; Reinforcement learning; Hex-grid; robot navigation; rule-based reinforcement learning; space reduction; EXPLORATION;
DOI
10.1109/TMECH.2021.3072675
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
For real-world deployments, it is critical that robots navigate complex environments autonomously. Traditional methods usually maintain an internal map of the environment and then design several simple rules, in conjunction with a localization and planning approach, to navigate through the internal map. These approaches often rely on a variety of assumptions and prior knowledge. In contrast, recent reinforcement learning (RL) methods provide a model-free, self-learning mechanism as the robot interacts with an initially unknown environment, but they are expensive to deploy in real-world scenarios because of inefficient exploration. In this article, we focus on efficient navigation with RL and combine the advantages of these two kinds of methods into a rule-based RL (RuRL) algorithm that reduces both sample complexity and time cost. First, we use the wall-following rule to generate a closed-loop trajectory. Second, we employ a reduction rule to shrink the trajectory, which effectively removes redundant exploration space; we also provide a theoretical guarantee that the optimal navigation path remains in the reduced space. Third, within the reduced space, we utilize the Pledge rule to guide the exploration strategy and accelerate the RL process at the early stage. Experiments on real robot navigation problems in hex-grid environments demonstrate that RuRL achieves improved navigation performance.
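The Pledge rule mentioned in the abstract is a classic maze-exit strategy: move along a preferred heading while the net turn counter is zero, and wall-follow while counting quarter turns after hitting an obstacle, leaving the wall once the counter returns to zero. The sketch below is illustrative only, on a hypothetical 4-connected square grid rather than the hex-grids used in the paper, and is not the authors' implementation; all names (`pledge_path`, `grid`, `preferred`) are assumptions for this example.

```python
# Minimal sketch of the classic Pledge rule on a 4-connected square grid.
# Illustrative only: the paper embeds the rule in an RL exploration strategy
# on hex-grids; this standalone version just walks toward a goal cell.

def pledge_path(grid, start, goal, preferred=1, max_steps=500):
    """Walk toward `goal` on `grid` (True = obstacle) with the Pledge rule:
    advance along the preferred heading while the net turn counter is zero,
    otherwise wall-follow (wall on the right) while counting quarter turns."""
    MOVES = [(-1, 0), (0, 1), (1, 0), (0, -1)]          # N, E, S, W

    def free(cell):
        r, c = cell
        return 0 <= r < len(grid) and 0 <= c < len(grid[0]) and not grid[r][c]

    pos, heading, turns = start, preferred, 0
    path = [pos]
    for _ in range(max_steps):
        if pos == goal:
            return path
        ahead = (pos[0] + MOVES[heading][0], pos[1] + MOVES[heading][1])
        if turns == 0 and heading == preferred:          # free motion
            if free(ahead):
                pos = ahead
                path.append(pos)
            else:                                        # obstacle: start wall-following
                heading = (heading - 1) % 4              # turn left (-90 degrees)
                turns -= 1
        else:                                            # wall-following, wall on the right
            right = (heading + 1) % 4
            side = (pos[0] + MOVES[right][0], pos[1] + MOVES[right][1])
            if free(side):                               # wall ended: turn right and advance
                heading, turns = right, turns + 1
                pos = side
                path.append(pos)
            elif free(ahead):                            # follow the wall straight ahead
                pos = ahead
                path.append(pos)
            else:                                        # corner: turn left in place
                heading = (heading - 1) % 4
                turns -= 1
    return None                                          # step budget exhausted
```

For example, on a 3x5 grid with a single obstacle at (1, 2), `pledge_path(grid, (1, 0), (1, 4))` detours around the obstacle and returns a path from start to goal that never enters the blocked cell.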
Pages: 846-857 (12 pages)
Related Papers
50 records
  • [1] Robot Navigation Framework Based on Reinforcement Learning for Intelligent Space
    Jeni, Laszlo A.
    Istenes, Zoltan
    Szemes, Peter
    Hashimoto, Hideki
    2008 Conference on Human System Interactions, Vols 1-2, 2008: 767+
  • [2] Persistent rule-based interactive reinforcement learning
    Bignold, Adam
    Cruz, Francisco
    Dazeley, Richard
    Vamplew, Peter
    Foale, Cameron
    Neural Computing & Applications, 2023, 35(32): 23411-23428
  • [4] A rule-based fuzzy traversability index for mobile robot navigation
    Howard, A
    Seraji, H
    Tunstel, E
    2001 IEEE International Conference on Robotics and Automation, Vols I-IV, Proceedings, 2001: 3067-3071
  • [5] Rule-based adaptive navigation for an intelligent educational mobile robot
    Oprea, Mihaela M.
    Artificial Intelligence Applications and Innovations, 2006, 204: 35-43
  • [6] Combining statistical and reinforcement learning in rule-based classification
    Jorge Muruzábal
    Computational Statistics, 2001, 16(03): 341-359
  • [8] Reinforcement learning in a rule-based navigator for robotic manipulators
    Althoefer, K
    Krekelberg, B
    Husmeier, D
    Seneviratne, L
    Neurocomputing, 2001, 37: 51-70
  • [9] Proxemics-based deep reinforcement learning for robot navigation in continuous action space
    Cimurs, Reinis
    Suh, Il-Hong
    Journal of Institute of Control, Robotics and Systems, 2020, 26(03): 168-176
  • [10] Learning to behave in space: A qualitative spatial representation for robot navigation with reinforcement learning
    Frommberger, Lutz
    International Journal on Artificial Intelligence Tools, 2008, 17(03): 465-482