Active Task-Inference-Guided Deep Inverse Reinforcement Learning

Cited by: 0
Authors
Memarian, Farzan [1 ]
Xu, Zhe [1 ]
Wu, Bo [1 ]
Wen, Min [3 ]
Topcu, Ufuk [1 ,2 ]
Affiliations
[1] Univ Texas Austin, Oden Inst Computat Engn & Sci, Austin, TX 78712 USA
[2] Univ Texas Austin, Dept Aerosp Engn & Engn Mech, Austin, TX 78712 USA
[3] Google LLC, Mountain View, CA USA
Keywords
DOI: not available
CLC number: TP [Automation and Computer Technology]
Discipline code: 0812
Abstract
We consider the problem of reward learning for temporally extended tasks. For reward learning, inverse reinforcement learning (IRL) is a widely used paradigm. Given a Markov decision process (MDP) and a set of demonstrations for a task, IRL learns a reward function that assigns a real-valued reward to each state of the MDP. However, for temporally extended tasks, the underlying reward function may not be expressible as a function of individual states of the MDP. Instead, the history of visited states may need to be considered to determine the reward at the current state. To address this issue, we propose an iterative algorithm to learn a reward function for temporally extended tasks. At each iteration, the algorithm alternates between two modules, a task inference module that infers the underlying task structure and a reward learning module that uses the inferred task structure to learn a reward function. The task inference module produces a series of queries, where each query is a sequence of subgoals. The demonstrator provides a binary response to each query by attempting to execute it in the environment and observing the environment's feedback. After the queries are answered, the task inference module returns an automaton encoding its current hypothesis of the task structure. The reward learning module augments the state space of the MDP with the states of the automaton. The module then proceeds to learn a reward function over the augmented state space using a novel deep maximum entropy IRL algorithm. This iterative process continues until it learns a reward function with satisfactory performance. The experiments show that the proposed algorithm significantly outperforms several IRL baselines on temporally extended tasks.
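The core construction in the abstract is the augmentation of the MDP's state space with the states of a task automaton, so that a reward defined over the augmented states can depend on the history of visited states. The toy corridor MDP, subgoal labels, and two-step automaton below are illustrative assumptions, not taken from the paper; the sketch only shows why the product state distinguishes trajectories that a state-based reward cannot.

```python
from itertools import product

# Illustrative 1-D corridor MDP with 4 cells; the assumed task is
# "visit cell 1, then cell 3" (a temporally extended task).
MDP_STATES = [0, 1, 2, 3]
DFA_STATES = ["q0", "q1", "q2"]  # q2 is accepting (task complete)

# Hypothetical task automaton over subgoal labels:
# q0 --visit1--> q1 --visit3--> q2; any other label leaves the state unchanged.
DFA_TRANSITIONS = {
    ("q0", "visit1"): "q1",
    ("q1", "visit3"): "q2",
}

def label(s):
    """Map an MDP state to a subgoal label (or None for unlabeled states)."""
    return {1: "visit1", 3: "visit3"}.get(s)

def dfa_step(q, s):
    """Advance the automaton on the label of MDP state s."""
    return DFA_TRANSITIONS.get((q, label(s)), q)

def augmented_states():
    """Product state space: pairs (MDP state, automaton state)."""
    return list(product(MDP_STATES, DFA_STATES))

def run(trajectory):
    """Track the automaton state along a trajectory of MDP states."""
    q = "q0"
    history = []
    for s in trajectory:
        q = dfa_step(q, s)
        history.append((s, q))
    return history

# Visiting 1 then 3 reaches the accepting state; 3 before 1 does not,
# even though both trajectories visit the same set of MDP states.
print(run([0, 1, 2, 3])[-1])  # -> (3, 'q2')
print(run([0, 3, 2, 0])[-1])  # -> (0, 'q0')
```

A reward function over these augmented pairs (e.g. rewarding arrival in `q2`) is history-dependent from the MDP's point of view, which is exactly what a reward over individual MDP states cannot express.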
Pages: 1932 - 1938 (7 pages)
Related papers (50 total)
  • [1] Symbolic Task Inference in Deep Reinforcement Learning
    Hasanbeig, Hosein
    Jeppu, Natasha Yogananda
    Abate, Alessandro
    Melham, Tom
    Kroening, Daniel
    [J]. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 80 : 1099 - 1137
  • [3] Spatiotemporal Costmap Inference for MPC Via Deep Inverse Reinforcement Learning
    Lee, Keuntaek
    Isele, David
    Theodorou, Evangelos A.
    Bae, Sangjae
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02) : 3194 - 3201
  • [4] Reinforcement Learning or Active Inference?
    Friston, Karl J.
    Daunizeau, Jean
    Kiebel, Stefan J.
    [J]. PLOS ONE, 2009, 4 (07)
  • [5] Active Exploration for Inverse Reinforcement Learning
    Lindner, David
    Krause, Andreas
    Ramponi, Giorgia
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [6] Inverse reinforcement learning through logic constraint inference
    Baert, Mattijs
    Leroux, Sam
    Simoens, Pieter
    [J]. MACHINE LEARNING, 2023, 112 (07) : 2593 - 2618
  • [8] Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing
    Gu, Anqi
    Wu, Huaming
    Tang, Huijun
    Tang, Chaogang
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 2200 - 2205
  • [9] Deep reinforcement learning with significant multiplications inference
    Ivanov, Dmitry A.
    Larionov, Denis A.
    Kiselev, Mikhail V.
    Dylov, Dmitry V.
    [J]. SCIENTIFIC REPORTS, 2023, 13 (01)