PROJECTED STATE-ACTION BALANCING WEIGHTS FOR OFFLINE REINFORCEMENT LEARNING

Cited by: 1
Authors
Wang, Jiayi [1 ]
Qi, Zhengling [2 ]
Wong, Raymond K. W. [3 ]
Affiliations
[1] University of Texas at Dallas, Department of Mathematical Sciences, Richardson, TX 75083, USA
[2] George Washington University, Department of Decision Sciences, Washington, DC 20052, USA
[3] Texas A&M University, Department of Statistics, College Station, TX 77843, USA
Source
ANNALS OF STATISTICS, 2023, Vol. 51, No. 4
Funding
U.S. National Science Foundation
Keywords
Infinite horizons; Markov decision process; Policy evaluation; Reinforcement learning; DYNAMIC TREATMENT REGIMES; RATES; CONVERGENCE; INFERENCE;
DOI
10.1214/23-AOS2302
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Off-policy evaluation is a fundamental and challenging problem in reinforcement learning (RL). This paper focuses on estimating the value of a target policy from pre-collected data generated by a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling method in RL and the covariate balancing idea in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is asymptotically normal under technical conditions. Our asymptotic results scale with both the number of trajectories and the number of decision points in each trajectory; consequently, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for the well-posedness of the operator underlying nonparametric Q-function estimation in the off-policy setting; this condition characterizes the difficulty of Q-function estimation and may be of independent interest. Numerical experiments demonstrate the promising performance of the proposed estimator.
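As context for the abstract: a marginal-importance-sampling estimator values the target policy as a weighted average of observed rewards, with weights approximating the ratio between the target policy's state-action visitation distribution and the data-generating policy's. The Python sketch below illustrates only this final weighted-average step under assumed data shapes; the weight function omega, the array shapes, and the function name are hypothetical stand-ins, not the paper's projected state-action balancing construction.

    import numpy as np

    def mis_value_estimate(states, actions, rewards, omega):
        """Weighted-average policy value estimate (marginal importance sampling).

        states:  array of shape (n, T, d), n trajectories with T decision points
        actions: array of shape (n, T)
        rewards: array of shape (n, T)
        omega:   callable (states, actions) -> (n, T) array of estimated ratios
                 of the target policy's state-action visitation distribution to
                 the behavior policy's; a hypothetical stand-in for the paper's
                 balancing weights
        """
        w = omega(states, actions)           # estimated density-ratio weights
        return float(np.mean(w * rewards))   # (1/(nT)) sum_{i,t} w(S_it, A_it) * R_it

    # Usage with a trivial hypothetical weight function (target == behavior,
    # so all ratios equal 1):
    # v_hat = mis_value_estimate(S, A, R, lambda s, a: np.ones(a.shape))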
Pages: 1639-1665
Page count: 27
Related Papers
50 records in total
  • [41] Scaling Up Q-Learning via Exploiting State-Action Equivalence
    Lyu, Yunlian
    Come, Aymeric
    Zhang, Yijie
    Talebi, Mohammad Sadegh
    ENTROPY, 2023, 25 (04)
  • [42] Automated Driving Highway Traffic Merging using Deep Multi-Agent Reinforcement Learning in Continuous State-Action Spaces
    Schester, Larry
    Ortiz, Luis E.
    2021 32ND IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2021: 280-287
  • [43] Value Preserving State-Action Abstractions
    Abel, David
    Umbanhowar, Nate
    Khetarpal, Khimya
    Arumugam, Dilip
    Precup, Doina
    Littman, Michael
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, 2020, Vol. 108: 1639-1649
  • [44] Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare
    Tang, Shengpu
    Makar, Maggie
    Sjoding, Michael W.
    Doshi-Velez, Finale
    Wiens, Jenna
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [45] Using Memory-Based Learning to Solve Tasks with State-Action Constraints
    Verghese, Mrinal
    Atkeson, Christopher
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023: 9558-9565
  • [46] SA-Net: Robust State-Action Recognition for Learning from Observations
    Soans, Nihal
    Asali, Ehsan
    Hong, Yi
    Doshi, Prashant
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020: 2153-2159
  • [47] Offline Reinforcement Learning with Pseudometric Learning
    Dadashi, Robert
    Rezaeifar, Shideh
    Vieillard, Nino
    Hussenot, Leonard
    Pietquin, Olivier
    Geist, Matthieu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 2021, Vol. 139
  • [48] Balancing therapeutic effect and safety in ventilator parameter recommendation: An offline reinforcement learning approach
    Zhang, Bo
    Qiu, Xihe
    Tan, Xiaoyu
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 131
  • [49] Balancing policy constraint and ensemble size in uncertainty-based offline reinforcement learning
    Beeson, Alex
    Montana, Giovanni
    MACHINE LEARNING, 2024, 113 (01): 443-488
  • [50] Benchmarking Offline Reinforcement Learning
    Tittaferrante, Andrew
    Yassine, Abdulsalam
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2022: 259-263