Misspecification in Inverse Reinforcement Learning

Cited by: 0
Authors
Skalse, Joar [1 ]
Abate, Alessandro [1 ]
Affiliations
[1] Univ Oxford, Dept Comp Sci, Oxford, England
Keywords: (none listed)
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The aim of Inverse Reinforcement Learning (IRL) is to infer a reward function R from a policy π. To do this, we need a model of how π relates to R. In the current literature, the most common models are optimality, Boltzmann rationality, and causal entropy maximisation. One of the primary motivations behind IRL is to infer human preferences from human behaviour. However, the true relationship between human preferences and human behaviour is far more complex than any of the models currently used in IRL. This means that these models are misspecified, which raises the worry that they may lead to unsound inferences when applied to real-world data. In this paper, we provide a mathematical analysis of how robust different IRL models are to misspecification, and characterise precisely how the demonstrator policy may differ from each of the standard models before that model leads to faulty inferences about the reward function R. We also introduce a framework for reasoning about misspecification in IRL, together with formal tools that make it easy to derive the misspecification robustness of new IRL models.
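For reference, the three behavioural models named in the abstract are standard in the IRL literature. A minimal sketch of their usual definitions follows, assuming an optimal Q-function Q* for R, an inverse temperature β > 0, a discount factor γ, and an entropy weight α; this notation is introduced here for illustration and is not taken from the paper itself.

    % Optimality: the demonstrator only takes actions that maximise Q*
    \pi(a \mid s) > 0 \implies a \in \arg\max_{a'} Q^*(s, a')

    % Boltzmann rationality: action probabilities are a softmax over Q*
    \pi(a \mid s) = \frac{\exp\big(\beta\, Q^*(s, a)\big)}{\sum_{a'} \exp\big(\beta\, Q^*(s, a')\big)}

    % Maximal causal entropy: the policy trades off reward against
    % the Shannon entropy H of its own action distribution
    \pi \in \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^t \Big( R(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big) \right]

Each model fixes a different mapping from R to π, and the paper's question is how far a real demonstrator may deviate from that mapping before inverting it yields a wrong R.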
Pages: 15136-15143 (8 pages)