Efficient Hierarchical Reinforcement Learning for Mapless Navigation With Predictive Neighbouring Space Scoring

Cited by: 1
Authors
Gao, Yan [1 ]
Wu, Jing [2 ]
Yang, Xintong [1 ]
Ji, Ze [1 ]
Affiliations
[1] Cardiff Univ, Sch Engn, Cardiff CF24 3AA, Wales
[2] Cardiff Univ, Sch Comp Sci Informat, Cardiff CF24 3AA, Wales
Keywords
Mapless navigation; deep reinforcement learning; collision avoidance; motion planning; hierarchical reinforcement learning
DOI
10.1109/TASE.2023.3312237
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Solving reinforcement learning (RL)-based mapless navigation tasks is challenging because of their sparse rewards and long decision horizons. Hierarchical reinforcement learning (HRL) can leverage knowledge at different levels of abstraction and is thus preferred for complex mapless navigation tasks. However, learning navigation end-to-end from raw high-dimensional sensor data, such as Lidar or RGB cameras, is computationally expensive and inefficient. Selecting subgoals based on a compact intermediate representation is therefore preferred for dimension reduction. This work proposes an efficient HRL-based framework that achieves this with a novel scoring method, named Predictive Neighbouring Space Scoring (PNSS). The PNSS model estimates the explorable space around a given position of interest based on the current robot observation. The PNSS values for a few candidate positions around the robot provide a compact and informative state representation for subgoal selection. We study the effects of different candidate position layouts and demonstrate that our layout design facilitates higher performance in longer-range tasks. Moreover, a penalty term is introduced into the reward function of the high-level (HL) policy, so that subgoal selection takes the performance of the low-level (LL) policy into consideration. Comprehensive evaluations demonstrate that the proposed PNSS module consistently improves performance over using Lidar only or Lidar with encoded RGB features.
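The abstract describes the mechanism only at a high level. The following minimal Python sketch illustrates the general idea it conveys: scoring a few candidate positions around the robot with a learned PNSS model, concatenating those scores into a compact high-level state for subgoal selection, and penalising the high-level reward by the low-level policy's cost. All names, layouts, and formulas here (pnss_model, candidate_offsets, step_penalty, the reward form, etc.) are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of a PNSS-based high-level (HL) state and reward.
# It only mirrors the abstract's description; the concrete formulation in
# the paper may differ substantially.

def pnss_scores(observation, candidate_offsets, pnss_model):
    """Score each candidate position (relative to the robot) with a learned
    PNSS model that predicts the explorable space around that position
    given the current observation."""
    return np.array([pnss_model(observation, offset) for offset in candidate_offsets])

def high_level_state(observation, goal_relative, candidate_offsets, pnss_model):
    """Compact HL state: PNSS values of the candidates plus the relative goal."""
    scores = pnss_scores(observation, candidate_offsets, pnss_model)
    return np.concatenate([scores, goal_relative])

def high_level_reward(goal_progress, ll_steps, collided,
                      step_penalty=0.01, collision_penalty=1.0):
    """HL reward with an assumed penalty term so that subgoal selection
    accounts for how costly the subgoal was for the low-level policy."""
    penalty = step_penalty * ll_steps + (collision_penalty if collided else 0.0)
    return goal_progress - penalty

if __name__ == "__main__":
    # Toy stand-in for a trained PNSS model: a random score per candidate.
    rng = np.random.default_rng(0)
    dummy_pnss = lambda obs, offset: float(rng.uniform(0.0, 1.0))

    # Eight candidate positions on a ring around the robot (the layout itself
    # is a design choice studied in the paper; this ring is only an example).
    offsets = [(np.cos(a), np.sin(a)) for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]

    state = high_level_state(observation=None, goal_relative=np.array([3.0, 1.5]),
                             candidate_offsets=offsets, pnss_model=dummy_pnss)
    print("HL state:", state)
    print("HL reward:", high_level_reward(goal_progress=0.8, ll_steps=25, collided=False))
```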
Pages: 5457-5472
Page count: 16
Related Papers (50 in total)
  • [41] Grando, Ricardo B.; de Jesus, Junior C.; Kich, Victor A.; Kolling, Alisson H.; Pinheiro, Pedro M.; Guerra, Rodrigo S.; Drews, Paulo L. J. Mapless Navigation of a Hybrid Aerial Underwater Vehicle with Deep Reinforcement Learning Through Environmental Generalization. 2022 Latin American Robotics Symposium (LARS), 2022 Brazilian Symposium on Robotics (SBR), and 2022 Workshop on Robotics in Education (WRE), 2022: 199-204.
  • [42] Bedin Grando, Ricardo; de Jesus, Junior Costa; Kich, Victor Augusto; Kolling, Alisson Henrique; Jorge Drews-Jr, Paulo Lilles. Double Critic Deep Reinforcement Learning for Mapless 3D Navigation of Unmanned Aerial Vehicles. Journal of Intelligent & Robotic Systems, 2022, 104(02).
  • [43] Mannucci, Tommaso; van Kampen, Erik-Jan. A Hierarchical Maze Navigation Algorithm with Reinforcement Learning and Mapping. Proceedings of 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016.
  • [44] Marchesini, Enrico; Farinelli, Alessandro. Centralizing State-Values in Dueling Networks for Multi-Robot Reinforcement Learning Mapless Navigation. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021: 4583-4588.
  • [45] Haarnoja, Tuomas; Hartikainen, Kristian; Abbeel, Pieter; Levine, Sergey. Latent Space Policies for Hierarchical Reinforcement Learning. International Conference on Machine Learning, Vol. 80, 2018.
  • [46] Bedin Grando, Ricardo; de Jesus, Junior Costa; Kich, Victor Augusto; Kolling, Alisson Henrique; Jorge Drews-Jr, Paulo Lilles. Double Critic Deep Reinforcement Learning for Mapless 3D Navigation of Unmanned Aerial Vehicles. Journal of Intelligent & Robotic Systems, 2022, 104.
  • [47] Hu, Yiming; Wang, Shuting; Xie, Yuanlong; Zheng, Shiqi; Shi, Peng; Rudas, Imre; Cheng, Xiang. Deep Reinforcement Learning-Based Mapless Navigation for Mobile Robot in Unknown Environment With Local Optima. IEEE Robotics and Automation Letters, 2025, 10(01): 628-635.
  • [48] Zhang, Tianren; Guo, Shangqi; Tan, Tian; Hu, Xiaolin; Chen, Feng. Adjacency Constraint for Efficient Hierarchical Reinforcement Learning. arXiv, 2021.
  • [49] Zhang, Tianren; Guo, Shangqi; Tan, Tian; Hu, Xiaolin; Chen, Feng. Adjacency Constraint for Efficient Hierarchical Reinforcement Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(04): 4152-4166.
  • [50] Nachum, Ofir; Gu, Shixiang; Lee, Honglak; Levine, Sergey. Data-Efficient Hierarchical Reinforcement Learning. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31.