Curricular Subgoals for Inverse Reinforcement Learning

Cited: 0
Authors
Liu, Shunyu [1 ]
Qing, Yunpeng [2 ]
Xu, Shuqi [3 ]
Wu, Hongyan [4 ]
Zhang, Jiangtao [4 ]
Cong, Jingyuan [2 ]
Chen, Tianhao [4 ]
Liu, Yun-Fu
Song, Mingli [1 ,5 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[3] Alibaba Grp, Hangzhou 310027, Peoples R China
[4] Zhejiang Univ, Coll Software Technol, Hangzhou 310027, Peoples R China
[5] Hangzhou High Tech Zone Binjiang, Inst Blockchain & Data Secur, Hangzhou 310051, Peoples R China
Keywords
Curricular subgoals; inverse reinforcement learning; reward function
DOI
10.1109/TITS.2025.3532519
CLC Classification Number
TU [Architectural Science]
Discipline Code
0813
Abstract
Inverse Reinforcement Learning (IRL) aims to reconstruct the reward function from expert demonstrations to facilitate policy learning, and has demonstrated remarkable success in imitation learning. To promote expert-like behavior, existing IRL methods mainly focus on learning global reward functions that minimize the trajectory difference between the imitator and the expert. However, these global designs remain limited by redundant noise and error propagation, leading to unsuitable reward assignment and thus degrading agent capability in complex multi-stage tasks. In this paper, we propose a novel Curricular Subgoal-based Inverse Reinforcement Learning (CSIRL) framework that explicitly disentangles one task into several local subgoals to guide agent imitation. Specifically, CSIRL first uses the decision uncertainty of the trained agent over expert trajectories to dynamically select specific states as subgoals, which directly determine the exploration boundaries of different task stages. To further acquire a local reward function for each stage, we customize a meta-imitation objective based on these curricular subgoals to train an intrinsic reward generator. Experiments on the D4RL and autonomous driving benchmarks demonstrate that the proposed method yields results superior to state-of-the-art counterparts, as well as better interpretability. Our code is publicly available at https://github.com/Plankson/CSIRL.
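The subgoal-selection step described in the abstract can be illustrated with a minimal Python sketch. Everything below is a hypothetical reconstruction from the abstract alone: the function name select_subgoal, the use of policy entropy as the decision-uncertainty measure, and the threshold rule are assumptions, not the authors' implementation (the actual code is in the linked repository).

    import numpy as np

    def select_subgoal(expert_states, policy_probs, threshold=0.5):
        """Scan an expert trajectory and return the first state where the
        agent's decision uncertainty (here: policy entropy, an assumed
        measure) exceeds a threshold; that state serves as the next
        curricular subgoal, i.e. the exploration boundary of the stage."""
        for state, probs in zip(expert_states, policy_probs):
            # Entropy of the agent's action distribution at this expert state.
            entropy = -np.sum(probs * np.log(probs + 1e-8))
            if entropy > threshold:
                return state
        # Agent is confident along the whole trajectory: use the final state.
        return expert_states[-1]

    # Toy usage: 3 expert states, agent action probabilities over 2 actions.
    states = [0, 1, 2]
    probs = [np.array([0.95, 0.05]),   # confident
             np.array([0.55, 0.45]),   # uncertain -> chosen as subgoal
             np.array([0.90, 0.10])]
    print(select_subgoal(states, probs))  # -> 1

In the framework the abstract describes, the selected state would then anchor the local reward function of the current stage before training proceeds to the next subgoal.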
Pages: 3016-3027
Page count: 12
Related Papers (50 in total)
  • [21] Inverse reinforcement learning with evaluation
    da Silva, Valdinei Freire
    Reali Costa, Anna Helena
    Lima, Pedro
    2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), VOLS 1-10, 2006: 4246+
  • [22] Identifiability in inverse reinforcement learning
    Cao, Haoyang
    Cohen, Samuel N.
    Szpruch, Lukasz
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [23] A survey of inverse reinforcement learning
    Adams, Stephen
    Cody, Tyler
    Beling, Peter A.
    ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (06) : 4307 - 4346
  • [24] Survey on Inverse Reinforcement Learning
    Zhang, L.-H.
    Liu, Q.
    Huang, Z.-G.
    Zhu, F.
    Ruan Jian Xue Bao/Journal of Software, 2023, 34 (10): 4772 - 4803
  • [26] A Controllable Agent by Subgoals in Path Planning Using Goal-Conditioned Reinforcement Learning
    Lee, Gyeong Taek
    Kim, Kangjin
    IEEE ACCESS, 2023, 11 : 33812 - 33825
  • [27] Hierarchical reinforcement learning of low-dimensional subgoals and high-dimensional trajectories
    Morimoto, J
    Doya, K
    ICONIP'98: THE FIFTH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING JOINTLY WITH JNNS'98: THE 1998 ANNUAL CONFERENCE OF THE JAPANESE NEURAL NETWORK SOCIETY - PROCEEDINGS, VOLS 1-3, 1998: 850 - 853
  • [28] Discovering Intrinsic Subgoals for Vision-and-Language Navigation via Hierarchical Reinforcement Learning
    Wang, Jiawei
    Wang, Teng
    Xu, Lele
    He, Zichen
    Sun, Changyin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 1 - 13
  • [29] Learning Behavior Styles with Inverse Reinforcement Learning
    Lee, Seong Jae
    Popovic, Zoran
    ACM TRANSACTIONS ON GRAPHICS, 2010, 29 (04)
  • [30] Reinforcement Learning and Inverse Reinforcement Learning with System 1 and System 2
    Peysakhovich, Alexander
    AIES '19: PROCEEDINGS OF THE 2019 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2019: 409 - 415