Speed Planning Based on Terrain-Aware Constraint Reinforcement Learning in Rugged Environments

Cited by: 1
Authors
Yang, Andong [1 ]
Li, Wei [1 ]
Hu, Yu [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Chinese Acad Sci, Inst Comp Technol, Res Ctr Intelligent Comp Syst, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Planning; Robots; Semantics; Data mining; Neural networks; Reinforcement learning; Mobile robots; Speed planning; mobile robot; rugged environments; reinforcement learning; MODEL-PREDICTIVE CONTROL; NAVIGATION;
DOI
10.1109/LRA.2024.3354629
CLC number
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
Speed planning in rugged terrain is challenging because of the many constraints involved, such as traversal efficiency, dynamics, safety, and smoothness. This letter introduces a framework based on Constrained Reinforcement Learning (CRL) that accounts for all of these constraints. A further obstacle is extracting terrain information in a form that can be added to the CRL problem as a constraint. To address this, a terrain constraint extraction module is designed that quantifies the semantic and geometric attributes of the terrain by estimating the maximum safe speed. All networks are trained in simulators or on datasets and are ultimately deployed on a real mobile robot. To continuously improve planning performance and mitigate errors caused by the simulation-to-reality gap, we propose a feedback structure that detects and preserves critical experiences during testing. Experiments in the simulator and on the real robot demonstrate that our method reduces the frequency of dangerous states by 45% and improves smoothness by up to 71%.
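
The abstract's core idea, a terrain module that estimates a maximum safe speed which then enters the reinforcement-learning problem as a constraint, can be illustrated with a minimal sketch. The Python sketch below is an assumption-laden illustration, not the authors' implementation: the terrain-to-speed mapping, the Lagrangian penalty update, and all names (estimate_max_safe_speed, constrained_reward, lam_lr) are hypothetical.

import numpy as np

def estimate_max_safe_speed(geometric_roughness, semantic_risk, v_cap=3.0):
    # Toy stand-in for terrain constraint extraction: rougher or riskier
    # terrain maps to a lower speed cap (assumed functional form).
    return v_cap * np.exp(-(geometric_roughness + semantic_risk))

def constrained_reward(v_cmd, v_max, progress_weight=1.0, lam=0.5):
    # Lagrangian-style shaping: reward progress, penalize exceeding the
    # terrain speed constraint by the current multiplier lam.
    violation = max(0.0, v_cmd - v_max)
    return progress_weight * v_cmd - lam * violation, violation

# Illustrative loop over random terrain samples (no learned policy here).
rng = np.random.default_rng(0)
lam, lam_lr = 0.5, 0.05                  # Lagrange multiplier and assumed step size
for step in range(5):
    rough, risk = rng.uniform(0.0, 1.0, size=2)
    v_max = estimate_max_safe_speed(rough, risk)
    v_cmd = rng.uniform(0.0, 3.0)        # stands in for the policy's speed command
    r, viol = constrained_reward(v_cmd, v_max, lam=lam)
    lam = max(0.0, lam + lam_lr * viol)  # dual ascent: raise penalty when violated
    print(f"step {step}: v_max={v_max:.2f} v_cmd={v_cmd:.2f} reward={r:.2f} lambda={lam:.2f}")

In a full CRL setting this dual update would run alongside policy optimization; the sketch only shows how a terrain-derived speed cap can act as the constraint signal.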
Pages: 2096 - 2103
Page count: 8
Related Papers
50 records in total
  • [31] Robot Navigation of Environments with Unknown Rough Terrain Using Deep Reinforcement Learning
    Zhang, Kaicheng
    Niroui, Farzad
    Ficocelli, Maurizio
    Nejat, Goldie
    2018 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY, AND RESCUE ROBOTICS (SSRR), 2018,
  • [32] Learning Terrain-Aware Kinodynamic Model for Autonomous Off-Road Rally Driving With Model Predictive Path Integral Control
    Lee, H.
    Kim, T.
    Mun, J.
    Lee, W.
    IEEE Robotics and Automation Letters, 2023, 8 (11) : 7663 - 7670
  • [33] Constrained footstep planning using model-based reinforcement learning in virtual constraint-based walking
    Jin, Takanori
    Kobayashi, Taisuke
    Matsubara, Takamitsu
    ADVANCED ROBOTICS, 2024, 38 (08) : 525 - 545
  • [34] Reinforcement learning with constraint based on mirror descent algorithm
    Miyashita, Megumi
    Kondo, Toshiyuki
    Yano, Shiro
    RESULTS IN CONTROL AND OPTIMIZATION, 2021, 4
  • [35] Deceptive Path Planning via Count-Based Reinforcement Learning under Specific Time Constraint
    Chen, Dejun
    Zeng, Yunxiu
    Zhang, Yi
    Li, Shuilin
    Xu, Kai
    Yin, Quanjun
    MATHEMATICS, 2024, 12 (13)
  • [36] Node Constraint Routing Algorithm based on Reinforcement Learning
    Dong, Weihang
    Zhang, Wei
    Yang, Wei
    PROCEEDINGS OF 2016 IEEE 13TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP 2016), 2016, : 1752 - 1756
  • [37] Reinforcement Learning-based Adaptive Trajectory Planning for AUVs in Under-ice Environments
    Wang, Chaofeng
    Wei, Li
    Wang, Zhaohui
    Song, Min
    Mahmoudian, Nina
    OCEANS 2018 MTS/IEEE CHARLESTON, 2018,
  • [38] UAV Path Planning and Obstacle Avoidance Based on Reinforcement Learning in 3D Environments
    Tu, Guan-Ting
    Juang, Jih-Gau
    ACTUATORS, 2023, 12 (02)
  • [39] Optimal energy system scheduling using a constraint-aware reinforcement learning algorithm
    Shengren, Hou
    Vergara, Pedro P.
    Duque, Edgar Mauricio Salazar
    Palensky, Peter
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2023, 152
  • [40] All Aware Robot Navigation in Human Environments Using Deep Reinforcement Learning
    Lu, Xiaojun
    Faragasso, Angela
    Yamashita, Atsushi
    Asama, Hajime
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 5989 - 5996