PROJECTED STATE-ACTION BALANCING WEIGHTS FOR OFFLINE REINFORCEMENT LEARNING

Cited by: 0
Authors
Wang, Jiayi [1 ]
Qi, Zhengling [2 ]
Wong, Raymond K. W. [3 ]
Affiliations
[1] Univ Texas Dallas, Dept Math Sci, Richardson, TX 75083 USA
[2] George Washington Univ, Dept Decis Sci, Washington, DC 20052 USA
[3] Texas A&M Univ, Dept Stat, College Stn, TX 77843 USA
Source
ANNALS OF STATISTICS | 2023, Vol. 51, No. 4
Funding
U.S. National Science Foundation
Keywords
Infinite horizons; Markov decision process; Policy evaluation; Reinforcement learning; DYNAMIC TREATMENT REGIMES; RATES; CONVERGENCE; INFERENCE;
DOI
10.1214/23-AOS2302
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
Off-policy evaluation is considered a fundamental and challenging problem in reinforcement learning (RL). This paper focuses on value estimation of a target policy based on pre-collected data generated from a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling method in RL and the covariate balancing idea in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is asymptotically normal under technical conditions. In terms of asymptotics, our results scale with both the number of trajectories and the number of decision points per trajectory. As such, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for establishing the well-posedness of the operator that underlies nonparametric Q-function estimation in the off-policy setting, which characterizes the difficulty of Q-function estimation and may be of independent interest. Numerical experiments demonstrate the promising performance of our proposed estimator.
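For context, the marginal importance sampling idea referenced in the abstract can be sketched as follows; the notation here is illustrative and not necessarily the paper's own. With discounted state-action visitation distributions $\bar{d}^{\pi}$ (target policy) and $\bar{d}^{b}$ (behavior policy), discount factor $\gamma$, and initial-state distribution $\nu_0$, the policy value can be written as a weighted average of rewards,

\[
V(\pi) = \frac{1}{1-\gamma}\,\mathbb{E}_{(S,A)\sim \bar{d}^{\,b}}\bigl[\omega^{\pi}(S,A)\,R\bigr],
\qquad
\omega^{\pi}(s,a) = \frac{\bar{d}^{\,\pi}(s,a)}{\bar{d}^{\,b}(s,a)},
\]

and the weights $\omega^{\pi}$ satisfy a balancing condition: for every test function $f$ in a suitable class,

\[
\mathbb{E}\Bigl[\omega^{\pi}(S,A)\Bigl\{f(S,A) - \gamma \sum_{a'}\pi(a'\mid S')\,f(S',a')\Bigr\}\Bigr]
= (1-\gamma)\,\mathbb{E}_{S_0\sim \nu_0}\Bigl[\sum_{a}\pi(a\mid S_0)\,f(S_0,a)\Bigr].
\]

Enforcing this condition only approximately, over a projected (finite-dimensional) class of test functions, is what the abstract's "approximately projected state-action balancing weights" refers to; the identities above state the standard marginal-importance-sampling relations rather than the paper's exact construction.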
Pages: 1639-1665
Page count: 27
Related papers
50 records in total
  • [21] Online Reinforcement Learning Control of Nonlinear Dynamic Systems: A State-action Value Function Based Solution
    Asl, Hamed Jabbari
    Uchibe, Eiji
    [J]. NEUROCOMPUTING, 2023, 544
  • [22] State Deviation Correction for Offline Reinforcement Learning
    Zhang, Hongchang
    Shao, Jianzhun
    Jiang, Yuhang
    He, Shuncheng
    Zhang, Guanwen
    Ji, Xiangyang
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 9022 - 9030
  • [23] UAC: Offline Reinforcement Learning With Uncertain Action Constraint
    Guan, Jiayi
    Gu, Shangding
    Li, Zhijun
    Hou, Jing
    Yang, Yiqin
    Chen, Guang
    Jiang, Changjun
    [J]. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (02) : 671 - 680
  • [24] Learning cooperative assembly with the graph representation of a state-action space
    Ferch, M
    Höchsmann, M
    Zhang, JW
    [J]. 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS, 2002, : 990 - 995
  • [25] R-learning with multiple state-action value tables
    Ishikawa, Koichiro
    Sakurai, Akito
    Fujinami, Tsutomu
    Kunifuji, Susumu
    [J]. ELECTRICAL ENGINEERING IN JAPAN, 2007, 159 (03) : 34 - 47
  • [26] Learning cooperative grasping with the graph representation of a state-action space
    Ferch, M
    Zhang, JW
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2002, 38 (3-4) : 183 - 195
  • [27] Reinforcement learning in dynamic environment - Abstraction of state-action space utilizing properties of the robot body and environment -
    Takeuchi, Yutaka
    Ito, Kazuyuki
    [J]. PROCEEDINGS OF THE SEVENTEENTH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL LIFE AND ROBOTICS (AROB 17TH '12), 2012, : 938 - 942
  • [28] Learning Pseudometric-based Action Representations for Offline Reinforcement Learning
    Gu, Pengjie
    Zhao, Mengchen
    Chen, Chen
    Li, Dong
    Hao, Jianye
    An, Bo
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [29] Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning
    Luo, Jianlan
    Dong, Perry
    Wu, Jeffrey
    Kumar, Aviral
    Geng, Xinyang
    Levine, Sergey
    [J]. CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229
  • [30] Exploiting Action Impact Regularity and Exogenous State Variables for Offline Reinforcement Learning (Abstract Reprint)
    Liu, Vincent
    Wright, James R.
    White, Martha
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 20, 2024, : 22706 - 22706