Curricular Subgoals for Inverse Reinforcement Learning

Cited by: 0
Authors
Liu, Shunyu [1 ]
Qing, Yunpeng [2 ]
Xu, Shuqi [3 ]
Wu, Hongyan [4 ]
Zhang, Jiangtao [4 ]
Cong, Jingyuan [2 ]
Chen, Tianhao [4 ]
Liu, Yun-Fu
Song, Mingli [1 ,5 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[3] Alibaba Grp, Hangzhou 310027, Peoples R China
[4] Zhejiang Univ, Coll Software Technol, Hangzhou 310027, Peoples R China
[5] Hangzhou High Tech Zone Binjiang, Inst Blockchain & Data Secur, Hangzhou 310051, Peoples R China
Keywords
Curricular subgoals; inverse reinforcement learning; reward function
DOI
10.1109/TITS.2025.3532519
Chinese Library Classification (CLC)
TU [Building Science];
Discipline Classification Code
0813
Abstract
Inverse Reinforcement Learning (IRL) aims to reconstruct the reward function from expert demonstrations to facilitate policy learning, and has demonstrated remarkable success in imitation learning. To promote expert-like behavior, existing IRL methods mainly focus on learning global reward functions that minimize the trajectory difference between the imitator and the expert. However, these global designs remain limited by redundant noise and error propagation, leading to unsuitable reward assignment and thus degrading agent capability in complex multi-stage tasks. In this paper, we propose a novel Curricular Subgoal-based Inverse Reinforcement Learning (CSIRL) framework that explicitly disentangles a task into several local subgoals to guide agent imitation. Specifically, CSIRL first uses the decision uncertainty of the trained agent over expert trajectories to dynamically select specific states as subgoals, which directly determine the exploration boundaries of the different task stages. To further acquire a local reward function for each stage, we customize a meta-imitation objective based on these curricular subgoals to train an intrinsic reward generator. Experiments on the D4RL and autonomous driving benchmarks demonstrate that the proposed method yields results superior to state-of-the-art counterparts, as well as better interpretability. Our code is publicly available at https://github.com/Plankson/CSIRL.
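The uncertainty-driven subgoal selection described in the abstract can be sketched roughly as follows. This is a simplified illustration, not the paper's implementation: the entropy-based uncertainty measure, the threshold, and the function and variable names are all assumptions made for the sketch.

```python
import math

def select_subgoals(expert_states, action_probs, threshold=0.5):
    """Select curricular subgoals along an expert trajectory.

    Marks expert states where the trained agent's decision uncertainty,
    here measured as the Shannon entropy of its action distribution,
    exceeds a threshold. A hypothetical sketch of uncertainty-driven
    subgoal selection; CSIRL's actual criterion may differ.
    """
    subgoals = []
    for t, (state, probs) in enumerate(zip(expert_states, action_probs)):
        # Shannon entropy of the agent's action distribution at this state.
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        if entropy > threshold:
            # The agent is uncertain here: treat this expert state as a
            # subgoal marking the boundary of a task stage.
            subgoals.append((t, state))
    return subgoals
```

In this sketch, states where the agent's policy is confidently peaked are skipped, while states where it is near-uniform (entropy above the threshold) become subgoals that partition the trajectory into stages.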
Pages: 3016-3027
Page count: 12