Curricular Subgoals for Inverse Reinforcement Learning

Cited: 0
Authors
Liu, Shunyu [1]
Qing, Yunpeng [2]
Xu, Shuqi [3]
Wu, Hongyan [4]
Zhang, Jiangtao [4]
Cong, Jingyuan [2]
Chen, Tianhao [4]
Liu, Yun-Fu
Song, Mingli [1,5]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[3] Alibaba Grp, Hangzhou 310027, Peoples R China
[4] Zhejiang Univ, Coll Software Technol, Hangzhou 310027, Peoples R China
[5] Hangzhou High Tech Zone Binjiang, Inst Blockchain & Data Secur, Hangzhou 310051, Peoples R China
Keywords
Curricular subgoals; inverse reinforcement learning; reward function
DOI
10.1109/TITS.2025.3532519
CLC number
TU [Building Science]
Discipline code
0813
Abstract
Inverse Reinforcement Learning (IRL) aims to reconstruct the reward function from expert demonstrations to facilitate policy learning, and has demonstrated remarkable success in imitation learning. To promote expert-like behavior, existing IRL methods mainly focus on learning global reward functions that minimize the trajectory difference between the imitator and the expert. However, these global designs remain limited by redundant noise and error propagation, leading to unsuitable reward assignment and thus degrading agent capability in complex multi-stage tasks. In this paper, we propose a novel Curricular Subgoal-based Inverse Reinforcement Learning (CSIRL) framework that explicitly disentangles a task into several local subgoals to guide agent imitation. Specifically, CSIRL first uses the decision uncertainty of the trained agent over expert trajectories to dynamically select specific states as subgoals, which directly determine the exploration boundary of each task stage. To further acquire local reward functions for each stage, we customize a meta-imitation objective based on these curricular subgoals to train an intrinsic reward generator. Experiments on the D4RL and autonomous driving benchmarks demonstrate that the proposed method yields results superior to state-of-the-art counterparts, with better interpretability. Our code is publicly available at https://github.com/Plankson/CSIRL.
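The subgoal-selection step described in the abstract (dynamically picking expert states where the trained agent's decision uncertainty is high) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes policy entropy as the uncertainty measure, and the function name `select_subgoal`, the threshold value, and the array shapes are all hypothetical. See the linked repository for the authors' real code.

```python
import numpy as np

def select_subgoal(policy_probs, threshold=0.5):
    """Pick the first expert state at which the agent's decision
    uncertainty (Shannon entropy of its policy) exceeds a threshold.

    policy_probs: (T, A) array of the trained agent's action
    distribution at each of the T states along an expert trajectory.
    Returns the index of the selected subgoal state, or the final
    state index if the agent is confident along the whole trajectory.
    """
    # Entropy of the policy at each expert state; the small epsilon
    # guards against log(0) for deterministic action choices.
    entropy = -np.sum(policy_probs * np.log(policy_probs + 1e-12), axis=1)
    uncertain = np.nonzero(entropy > threshold)[0]
    if len(uncertain) > 0:
        return int(uncertain[0])          # earliest high-uncertainty state
    return policy_probs.shape[0] - 1      # confident everywhere: use the end
```

In this sketch the agent is confident on the first two states and uncertain on the third, so the third state becomes the next curricular subgoal; as training reduces uncertainty near early subgoals, the selected state naturally moves further along the expert trajectory.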
Pages: 3016-3027
Page count: 12
Related papers
50 items in total
  • [31] MASER: Multi-Agent Reinforcement Learning with Subgoals Generated from Experience Replay Buffer
    Jeon, Jeewon
    Kim, Woojun
    Jung, Whiyoung
    Sung, Youngchul
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022, : 10041 - 10052
  • [32] COMBINATIONS OF MICRO-MACRO STATES AND SUBGOALS DISCOVERY IN HIERARCHICAL REINFORCEMENT LEARNING FOR PATH FINDING
    Setyawan, Gembong Edhi
    Sawada, Hideyuki
    Hartono, Pitoyo
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2022, 18 (02): : 447 - 462
  • [33] Hierarchical Reinforcement Learning-Based End-to-End Visual Servoing With Smooth Subgoals
    He, Yaozhen
    Gao, Jian
    Li, Huiping
    Chen, Yimin
    Li, Yufeng
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024, 71 (09) : 11009 - 11018
  • [34] Inverse Reinforcement Learning for Text Summarization
    Fu, Yu
    Xiong, Deyi
    Dong, Yue
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 6559 - 6570
  • [35] Reward Identification in Inverse Reinforcement Learning
    Kim, Kuno
    Garg, Shivam
    Shiragur, Kirankumar
    Ermon, Stefano
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [36] Compatible Reward Inverse Reinforcement Learning
    Metelli, Alberto Maria
    Pirotta, Matteo
    Restelli, Marcello
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [37] Training parsers by inverse reinforcement learning
    Gergely Neu
    Csaba Szepesvári
    Machine Learning, 2009, 77 : 303 - 337
  • [38] Inverse Reinforcement Learning with Gaussian Process
    Qiao, Qifeng
    Beling, Peter A.
    2011 AMERICAN CONTROL CONFERENCE, 2011, : 113 - 118
  • [39] Active Exploration for Inverse Reinforcement Learning
    Lindner, David
    Krause, Andreas
    Ramponi, Giorgia
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [40] Recent Advancements in Inverse Reinforcement Learning
    Metelli, Alberto Maria
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 20, 2024, : 22680 - 22680