Null Space Based Efficient Reinforcement Learning with Hierarchical Safety Constraints

Cited by: 0
Authors
Yang, Quantao [1 ]
Stork, Johannes A. [1 ]
Stoyanov, Todor [1 ]
Affiliations
[1] Orebro Univ, Autonomous Mobile Manipulat Lab AMM, Orebro, Sweden
Keywords
DOI
10.1109/ECMR50962.2021.9568848
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Reinforcement learning is inherently unsafe for use in physical systems, as learning by trial-and-error can cause harm to the environment or the robot itself. One way to avoid unpredictable exploration is to add constraints in the action space to restrict the robot behavior. In this paper, we propose a null space based framework for integrating reinforcement learning methods in constrained continuous action spaces. We leverage a hierarchical control framework to decompose target robotic skills into higher ranked tasks (e.g., joint limits and obstacle avoidance) and a lower ranked reinforcement learning task. Safe exploration is guaranteed by only learning policies in the null space of higher prioritized constraints. Meanwhile, multiple constraint phases for different operational spaces are constructed to guide the robot exploration. We also add a penalty loss for violating higher ranked constraints to accelerate the learning procedure. We have evaluated our method on different redundant robotic tasks in simulation and show that our null space based reinforcement learning method can explore and learn safely and efficiently.
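The core idea described in the abstract can be illustrated with a minimal sketch (this is an assumption about the general null space projection technique, not the authors' actual implementation): a raw RL action is passed through the projector N = I - J^+ J of a higher-ranked task Jacobian, so the learned exploration cannot perturb the higher-prioritized constraint. The function names, the toy Jacobian, and the action values below are hypothetical.

```python
import numpy as np

def null_space_projector(J):
    """Null space projector N = I - J^+ J for a task Jacobian J (m x n)."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    return np.eye(n) - J_pinv @ J

def safe_action(qdot_safety, J_safety, a_rl):
    """Compose a joint-velocity command: the higher-ranked safety task output
    qdot_safety is applied directly, while the RL action a_rl is projected
    into the null space of the safety Jacobian so it cannot disturb the
    higher-ranked constraint (hypothetical helper, for illustration only)."""
    N = null_space_projector(J_safety)
    return qdot_safety + N @ a_rl

# Toy example: 3-DoF redundant arm with a 1-D higher-ranked safety task.
J_safety = np.array([[1.0, 0.5, 0.2]])   # assumed safety-task Jacobian
qdot_safety = np.zeros(3)                # safety task currently satisfied
a_rl = np.array([0.3, -0.1, 0.4])        # raw RL exploration action
cmd = safe_action(qdot_safety, J_safety, a_rl)
print(J_safety @ cmd)                    # ~0: the RL action cannot violate the safety task
```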
Pages: 6
Related Papers (50 in total)
  • [1] Reinforcement Learning of Space Robotic Manipulation with Multiple Safety Constraints
    Li, Linfeng
    Xie, Yongchun
    Wang, Yong
    Chen, Ao
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7367 - 7372
  • [2] Hierarchical Reinforcement Learning Based on Continuous Subgoal Space
    Wang, Chen
    Zeng, Fanyu
    Ge, Shuzhi Sam
    Jiang, Xin
    2020 IEEE INTERNATIONAL CONFERENCE ON REAL-TIME COMPUTING AND ROBOTICS (IEEE-RCAR 2020), 2020, : 74 - 80
  • [3] Reconnaissance for Reinforcement Learning with Safety Constraints
    Maeda, Shin-ichi
    Watahiki, Hayato
    Ouyang, Yi
    Okada, Shintarou
    Koyama, Masanori
    Nagarajan, Prabhat
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021: RESEARCH TRACK, PT II, 2021, 12976 : 567 - 582
  • [4] An Efficient Approach to Model-Based Hierarchical Reinforcement Learning
    Li, Zhuoru
    Narayan, Akshay
    Leong, Tze-Yun
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3583 - 3589
  • [5] Efficient Hierarchical Reinforcement Learning for Mapless Navigation With Predictive Neighbouring Space Scoring
    Gao, Yan
    Wu, Jing
    Yang, Xintong
    Ji, Ze
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (04) : 5457 - 5472
  • [6] Hierarchical reinforcement learning algorithm based on structural state-space
    Meng, Jiang-Hua
    Zhu, Ji-Hong
    Sun, Zeng-Qi
    Kongzhi yu Juece/Control and Decision, 2007, 22 (02): 233 - 237
  • [7] Latent Space Policies for Hierarchical Reinforcement Learning
    Haarnoja, Tuomas
    Hartikainen, Kristian
    Abbeel, Pieter
    Levine, Sergey
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [8] Sampling-based Inverse Reinforcement Learning Algorithms with Safety Constraints
    Fischer, Johannes
    Eyberg, Christoph
    Werling, Moritz
    Lauer, Martin
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 791 - 798
  • [9] Adjacency Constraint for Efficient Hierarchical Reinforcement Learning
    Zhang, Tianren
    Guo, Shangqi
    Tan, Tian
    Hu, Xiaolin
    Chen, Feng
    arXiv, 2021,
  • [10] Adjacency Constraint for Efficient Hierarchical Reinforcement Learning
    Zhang, Tianren
    Guo, Shangqi
    Tan, Tian
    Hu, Xiaolin
    Chen, Feng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4152 - 4166