Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits

Cited by: 11
Authors:
Zhan, Ruohan [1]
Hadad, Vitor [1]
Hirshberg, David A. [1]
Athey, Susan [1]
Affiliation:
[1] Stanford University, Stanford, CA 94305, USA
Keywords:
contextual bandits; off-policy evaluation; adaptive weighting; variance reduction
DOI:
10.1145/3447548.3467456
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
It has become increasingly common for data to be collected adaptively, for example using contextual bandits. Historical data of this type can be used to evaluate other treatment assignment policies to guide future innovation or experiments. However, policy evaluation is challenging if the target policy differs from the one used to collect data, and popular estimators, including doubly robust (DR) estimators, can be plagued by bias, excessive variance, or both. In particular, when the pattern of treatment assignment in the collected data looks little like the pattern generated by the policy to be evaluated, the importance weights used in DR estimators explode, leading to excessive variance. In this paper, we improve the DR estimator by adaptively weighting observations to control its variance. We show that a t-statistic based on our improved estimator is asymptotically normal under certain conditions, allowing us to form confidence intervals and test hypotheses. Using synthetic data and public benchmarks, we provide empirical evidence for our estimator's improved accuracy and inferential properties relative to existing alternatives.
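The abstract describes the method only at a high level; the sketch below makes the moving parts concrete. It is not the authors' implementation: the variable names (y, q_hat_target, q_hat_pulled, pi_t, e_t) and the illustrative weight choice h_t = sqrt(e_t) are assumptions for exposition, while the structure (per-round doubly robust scores, a weighted average, a studentized t-statistic) follows the recipe the abstract outlines. The paper derives its adaptive weights from the data to control variance; the simple square-root rule here merely stands in for such a variance-stabilizing choice.

    import numpy as np

    def dr_scores(y, q_hat_target, q_hat_pulled, pi_t, e_t):
        # Per-round doubly robust scores Gamma_t: a model-based value
        # estimate for the target policy, plus an importance-weighted
        # residual correction. The ratio pi_t / e_t is the importance
        # weight that explodes when the logging policy rarely takes
        # the actions the target policy prefers.
        return q_hat_target + (pi_t / e_t) * (y - q_hat_pulled)

    def weighted_dr_estimate(gamma, h):
        # Weighted average of DR scores with per-round weights h_t.
        # h_t = 1 for all t recovers the standard DR estimator;
        # down-weighting high-variance rounds stabilizes the estimate.
        return np.sum(h * gamma) / np.sum(h)

    def weighted_dr_tstat(gamma, h, null_value=0.0):
        # Studentized statistic for the weighted estimator, using a
        # plug-in standard error built from the weighted residuals.
        est = weighted_dr_estimate(gamma, h)
        se = np.sqrt(np.sum((h * (gamma - est)) ** 2)) / np.sum(h)
        return (est - null_value) / se

    # Toy usage on synthetic logged bandit data (all values hypothetical).
    rng = np.random.default_rng(0)
    n = 1000
    e_t = rng.uniform(0.05, 0.95, size=n)        # logging prob. of pulled arm
    pi_t = rng.uniform(0.0, 1.0, size=n)         # target-policy prob. of pulled arm
    q_hat_pulled = rng.normal(0.5, 0.1, size=n)  # outcome-model fit, pulled arm
    q_hat_target = rng.normal(0.5, 0.1, size=n)  # outcome-model fit, target policy
    y = q_hat_pulled + rng.normal(0.0, 1.0, size=n)

    gamma = dr_scores(y, q_hat_target, q_hat_pulled, pi_t, e_t)
    h = np.sqrt(e_t)  # illustrative variance-stabilizing weights, not the paper's rule
    print(weighted_dr_estimate(gamma, h), weighted_dr_tstat(gamma, h, null_value=0.5))

Setting h to all ones recovers the standard DR estimator for comparison; the abstract's asymptotic-normality claim concerns a studentized statistic of this general form, under conditions on how the weights are constructed.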
Pages: 2125-2135
Page count: 11
Related papers (items [41]-[50] of 50):
  • [41] Debiased Off-Policy Evaluation for Recommendation Systems
    Narita, Yusuke
    Yasui, Shota
    Yata, Kohei
    15TH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS 2021), 2021: 372-379
  • [42] Off-Policy Evaluation in Partially Observable Environments
    Tennenholtz, Guy
    Mannor, Shie
    Shalit, Uri
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34: 10276-10283
  • [43] On the Design of Estimators for Bandit Off-Policy Evaluation
    Vlassis, Nikos
    Bibaut, Aurelien
    Dimakopoulou, Maria
    Jebara, Tony
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [44] Off-Policy Evaluation with Policy-Dependent Optimization Response
    Guo, Wenshuo
    Jordan, Michael I.
    Zhou, Angela
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [45] Bounded Off-Policy Evaluation with Missing Data for Course Recommendation and Curriculum Design
    Hoiles, William
    van der Schaar, Mihaela
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [46] Asymptotically Unbiased Off-Policy Policy Evaluation when Reusing Old Data in Nonstationary Environments
    Liu, Vincent
    Chandak, Yash
    Thomas, Philip
    White, Martha
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [47] Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation
    Keramati, Ramtin
    Gottesman, Omer
    Celi, Leo Anthony
    Doshi-Velez, Finale
    Brunskill, Emma
    CONFERENCE ON HEALTH, INFERENCE, AND LEARNING, VOL 174, 2022, 174: 397-410
  • [48] Minimax Value Interval for Off-Policy Evaluation and Policy Optimization
    Jiang, Nan
    Huang, Jiawei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [49] Interpretable Off-Policy Learning via Hyperbox Search
    Tschernutter, Daniel
    Hatt, Tobias
    Feuerriegel, Stefan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [50] Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation
    Tang, Yunhao
    Kozuno, Tadashi
    Rowland, Mark
    Munos, Remi
    Valko, Michal
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021