Inverse Reinforcement Learning in a Continuous State Space with Formal Guarantees

Cited by: 0
Authors
Dexter, Gregory [1]
Bello, Kevin [1]
Honorio, Jean [1]
Affiliations
[1] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Inverse Reinforcement Learning (IRL) is the problem of finding a reward function that explains observed expert behavior. The IRL setting is remarkably useful for automated control in situations where the reward function is difficult to specify manually, or as a means of extracting agent preferences. In this work, we provide a new IRL algorithm for the continuous-state-space setting with unknown transition dynamics, modeling the system with a basis of orthonormal functions. Moreover, we prove the correctness of our algorithm and give formal guarantees on its sample and time complexity. Finally, we present synthetic experiments that corroborate our theoretical guarantees.
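The abstract's central technical device, representing functions over a continuous state space in an orthonormal function basis, can be sketched as follows. This is an illustrative example only: the cosine basis on [0, 1] and the toy "reward" below are my assumptions, not the paper's construction. The point it demonstrates is that a function lying in the span of finitely many basis elements is recovered from its inner products with the basis.

```python
import numpy as np

def cosine_basis(k, x):
    """k-th element of the orthonormal cosine basis of L2([0, 1])."""
    if k == 0:
        return np.ones_like(x)
    return np.sqrt(2.0) * np.cos(np.pi * k * x)

def basis_coefficients(f, num_basis, num_grid=20_000):
    """Approximate c_k = <f, phi_k> with the midpoint rule on a uniform grid."""
    x = (np.arange(num_grid) + 0.5) / num_grid  # midpoints of [0, 1]
    fx = f(x)
    return np.array([(fx * cosine_basis(k, x)).mean() for k in range(num_basis)])

def reconstruct(coeffs, x):
    """Evaluate the finite expansion sum_k c_k * phi_k(x)."""
    return sum(c * cosine_basis(k, x) for k, c in enumerate(coeffs))

# Hypothetical "reward" lying in the span of the first 6 basis functions,
# so a 6-term expansion recovers it up to quadrature error.
true_coeffs = {0: 0.5, 2: 0.3, 5: -0.2}
reward = lambda x: sum(c * cosine_basis(k, x) for k, c in true_coeffs.items())

coeffs = basis_coefficients(reward, num_basis=6)
x = np.linspace(0.0, 1.0, 201)
max_err = np.max(np.abs(reconstruct(coeffs, x) - reward(x)))
```

In the paper's setting, with unknown transition dynamics, such inner products would have to be estimated from sampled data rather than computed exactly; the formal guarantees then concern how that estimation error is controlled by the sample size.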
Pages: 11