Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits

Cited by: 11
Authors
Zhan, Ruohan [1 ]
Hadad, Vitor [1 ]
Hirshberg, David A. [1 ]
Athey, Susan [1 ]
Affiliation
[1] Stanford Univ, Stanford, CA 94305 USA
Keywords
contextual bandits; off-policy evaluation; adaptive weighting; variance reduction;
DOI
10.1145/3447548.3467456
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
It has become increasingly common for data to be collected adaptively, for example using contextual bandits. Historical data of this type can be used to evaluate other treatment assignment policies to guide future innovation or experiments. However, policy evaluation is challenging if the target policy differs from the one used to collect data, and popular estimators, including doubly robust (DR) estimators, can be plagued by bias, excessive variance, or both. In particular, when the pattern of treatment assignment in the collected data looks little like the pattern generated by the policy to be evaluated, the importance weights used in DR estimators explode, leading to excessive variance. In this paper, we improve the DR estimator by adaptively weighting observations to control its variance. We show that a t-statistic based on our improved estimator is asymptotically normal under certain conditions, allowing us to form confidence intervals and test hypotheses. Using synthetic data and public benchmarks, we provide empirical evidence for our estimator's improved accuracy and inferential properties relative to existing alternatives.
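The contrast the abstract draws, between a doubly robust (DR) estimator averaged uniformly and one whose observations are reweighted to control variance, can be sketched on simulated logged bandit data. The outcome model `mu_hat` and the stabilizing weights `h` below are simple stand-ins chosen for illustration; they are not the paper's actual adaptive weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated logged bandit data: for each round we record the action taken,
# the observed reward, and the logging policy's propensity for that action.
n = 5000
propensities = rng.uniform(0.1, 0.9, size=n)        # P(a_t = 1) under the logging policy
actions = (rng.uniform(size=n) < propensities).astype(int)
true_means = np.array([0.3, 0.5])                   # E[reward | action]
rewards = true_means[actions] + rng.normal(0.0, 1.0, size=n)

# Target policy to evaluate: always play action 1, so its true value is 0.5.
target_prob = (actions == 1).astype(float)          # pi(a_t | x_t) for the logged action
mu_hat = 0.45                                       # crude outcome-model guess for action 1 (assumption)

# Doubly robust (AIPW) scores: model prediction plus an
# importance-weighted residual correction.
iw = target_prob / propensities                     # importance weights; large when propensity is small
dr_scores = mu_hat + iw * (rewards - mu_hat)

# Standard DR estimate: uniform average of the scores.
dr_estimate = dr_scores.mean()

# Reweighted DR estimate (illustrative variant): down-weight observations
# with small propensities, whose scores carry the highest variance.
h = np.sqrt(propensities)
aw_estimate = np.sum(h * dr_scores) / np.sum(h)
```

Because each DR score is conditionally unbiased for the policy value, any weighting that depends only on the propensities leaves the weighted average consistent while shrinking the contribution of high-variance, small-propensity rounds.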
Pages: 2125-2135
Page count: 11
Related Papers
(50 records in total)
  • [31] Stable Policy Optimization via Off-Policy Divergence Regularization
    Touati, Ahmed
    Zhang, Amy
    Pineau, Joelle
    Vincent, Pascal
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 1328 - 1337
  • [32] Learning Action Embeddings for Off-Policy Evaluation
    Cief, Matej
    Golebiowski, Jacek
    Schmidt, Philipp
    Abedjan, Ziawasch
    Bekasov, Artur
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT I, 2024, 14608 : 108 - 122
  • [33] A perspective on off-policy evaluation in reinforcement learning
    Li, Lihong
    FRONTIERS OF COMPUTER SCIENCE, 2019, 13 (05) : 911 - 912
  • [35] Off-Policy Evaluation in Doubly Inhomogeneous Environments
    Bian, Zeyu
    Shi, Chengchun
    Qi, Zhengling
    Wang, Lan
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2024,
  • [36] Distributional Off-Policy Evaluation for Slate Recommendations
    Chaudhari, Shreyas
    Arbour, David
    Theocharous, Georgios
    Vlassis, Nikos
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 8265 - 8273
  • [37] Control Variates for Slate Off-Policy Evaluation
    Vlassis, Nikos
    Chandrashekar, Ashok
    Gil, Fernando Amat
    Kallus, Nathan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [38] Adaptive Trade-Offs in Off-Policy Learning
    Rowland, Mark
    Dabney, Will
    Munos, Remi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 34 - 43
  • [39] Reliable Off-Policy Evaluation for Reinforcement Learning
    Wang, Jie
    Gao, Rui
    Zha, Hongyuan
    OPERATIONS RESEARCH, 2024, 72 (02) : 699 - 716
  • [40] Handling Confounding for Realistic Off-Policy Evaluation
    Sohoney, Saurabh
    Prabhu, Nikita
    Chaoji, Vineet
    COMPANION PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2018 (WWW 2018), 2018, : 33 - 34