Curricular Subgoals for Inverse Reinforcement Learning

Cited by: 0
Authors
Liu, Shunyu [1]
Qing, Yunpeng [2]
Xu, Shuqi [3]
Wu, Hongyan [4]
Zhang, Jiangtao [4]
Cong, Jingyuan [2]
Chen, Tianhao [4]
Liu, Yun-Fu
Song, Mingli [1,5]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[3] Alibaba Grp, Hangzhou 310027, Peoples R China
[4] Zhejiang Univ, Coll Software Technol, Hangzhou 310027, Peoples R China
[5] Hangzhou High Tech Zone Binjiang, Inst Blockchain & Data Secur, Hangzhou 310051, Peoples R China
Keywords
Curricular subgoals; inverse reinforcement learning; reward function
DOI
10.1109/TITS.2025.3532519
CLC Number
TU [Architectural Science]
Discipline Code
0813
Abstract
Inverse Reinforcement Learning (IRL) aims to reconstruct the reward function from expert demonstrations to facilitate policy learning, and has demonstrated remarkable success in imitation learning. To promote expert-like behavior, existing IRL methods mainly focus on learning global reward functions that minimize the trajectory difference between the imitator and the expert. However, these global designs remain limited by redundant noise and error propagation, leading to unsuitable reward assignment and thus degrading agent capability in complex multi-stage tasks. In this paper, we propose a novel Curricular Subgoal-based Inverse Reinforcement Learning (CSIRL) framework that explicitly disentangles a task into several local subgoals to guide agent imitation. Specifically, CSIRL first uses the decision uncertainty of the trained agent over expert trajectories to dynamically select specific states as subgoals, which directly determine the exploration boundary of each task stage. To further acquire local reward functions for each stage, we customize a meta-imitation objective based on these curricular subgoals to train an intrinsic reward generator. Experiments on the D4RL and autonomous driving benchmarks demonstrate that the proposed method yields results superior to state-of-the-art counterparts, as well as better interpretability. Our code is publicly available at https://github.com/Plankson/CSIRL.
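The abstract's subgoal-selection step can be illustrated in miniature. The sketch below is a hypothetical toy, not the authors' implementation: measuring "decision uncertainty" as disagreement within a Q-function ensemble, the function names, and the threshold are all assumptions made for illustration.

```python
import numpy as np

def decision_uncertainty(q_values: np.ndarray) -> np.ndarray:
    """Per-state decision uncertainty, measured here as ensemble disagreement
    on the greedy action value. q_values: (n_members, n_states, n_actions)."""
    greedy = q_values.max(axis=2)   # greedy value per member: (n_members, n_states)
    return greedy.std(axis=0)       # disagreement across members: (n_states,)

def select_subgoals(q_values: np.ndarray, threshold: float) -> list[int]:
    """Indices of expert-trajectory states whose uncertainty exceeds the
    threshold; such states mark candidate stage boundaries (subgoals)."""
    u = decision_uncertainty(q_values)
    return [i for i, v in enumerate(u) if v > threshold]

# Toy expert trajectory of 4 states, 2 actions, evaluated by a 3-member Q ensemble.
q = np.zeros((3, 4, 2))
q[:, 0, :] = 1.0                                     # state 0: members agree -> low uncertainty
q[0, 2, 0], q[1, 2, 0], q[2, 2, 0] = 0.0, 1.0, 2.0  # state 2: members disagree
print(select_subgoals(q, threshold=0.5))  # → [2]
```

Only state 2, where the ensemble disagrees, is flagged as a subgoal; in the paper's framing, such states would partition the expert trajectory into curriculum stages, each with its own local reward function.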
Pages: 3016-3027 (12 pages)
Related Papers (50 total)
  • [41] Multiagent Adversarial Inverse Reinforcement Learning
    Wei, Ermo
    Wicke, Drew
    Luke, Sean
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 2265 - 2266
  • [42] Preference Elicitation and Inverse Reinforcement Learning
    Rothkopf, Constantin A.
    Dimitrakakis, Christos
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT III, 2011, 6913 : 34 - 48
  • [43] Inverse Reinforcement Learning with Constraint Recovery
    Das, Nirjhar
    Chattopadhyay, Arpan
    PATTERN RECOGNITION AND MACHINE INTELLIGENCE, PREMI 2023, 2023, 14301 : 179 - 188
  • [44] Training parsers by inverse reinforcement learning
    Neu, Gergely
    Szepesvari, Csaba
    MACHINE LEARNING, 2009, 77 (2-3) : 303 - 337
  • [45] A survey of inverse reinforcement learning techniques
    Shao Zhifei
    Joo, Er Meng
    INTERNATIONAL JOURNAL OF INTELLIGENT COMPUTING AND CYBERNETICS, 2012, 5 (03) : 293 - 311
  • [46] Inverse Reinforcement Learning for Strategy Identification
    Rucker, Mark
    Adams, Stephen
    Hayes, Roy
    Beling, Peter A.
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 3067 - 3074
  • [47] Inverse reinforcement learning in contextual MDPs
    Belogolovsky, Stav
    Korsunsky, Philip
    Mannor, Shie
    Tessler, Chen
    Zahavy, Tom
    MACHINE LEARNING, 2021, 110 (09) : 2295 - 2334
  • [48] Hierarchical Bayesian Inverse Reinforcement Learning
    Choi, Jaedeug
    Kim, Kee-Eung
    IEEE TRANSACTIONS ON CYBERNETICS, 2015, 45 (04) : 793 - 805
  • [49] Inverse reinforcement learning in contextual MDPs
    Stav Belogolovsky
    Philip Korsunsky
    Shie Mannor
    Chen Tessler
    Tom Zahavy
    Machine Learning, 2021, 110 : 2295 - 2334
  • [50] Inverse Reinforcement Learning from Failure
    Shiarlis, Kyriacos
    Messias, Joao
    Whiteson, Shimon
    AAMAS'16: PROCEEDINGS OF THE 2016 INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS & MULTIAGENT SYSTEMS, 2016, : 1060 - 1068