Uncertainty-Aware Instance Reweighting for Off-Policy Learning

Cited by: 0
Authors
Zhang, Xiaoying [1 ]
Chen, Junpu [2 ]
Wang, Hongning [3 ]
Xie, Hong [4 ]
Liu, Yang [1 ]
Lui, John C. S. [5 ]
Li, Hang [1 ]
Affiliations
[1] ByteDance Res, Beijing, Peoples R China
[2] Chongqing Univ, Chongqing, Peoples R China
[3] Tsinghua Univ, Beijing, Peoples R China
[4] Chinese Acad Sci, Chongqing Inst Green & Intelligent Technol, Chongqing, Peoples R China
[5] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Keywords
DOI
None available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Off-policy learning, the procedure of optimizing a policy with access only to logged feedback data, has proven important in various real-world applications, such as search engines and recommender systems. Since the ground-truth logging policy is usually unknown, previous work simply plugs in an estimate of it for off-policy learning, ignoring the negative impact of both the high bias and the high variance introduced by such an estimator. This impact is often magnified on samples whose logging probabilities are small and inaccurately estimated. The contribution of this work is to explicitly model the uncertainty in the estimated logging policy and to propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning, with a theoretical convergence guarantee. Experimental results on synthetic and real-world recommendation datasets demonstrate that UIPS significantly improves the quality of the discovered policy compared against an extensive list of state-of-the-art baselines.
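To make the idea concrete, the sketch below (in Python) illustrates uncertainty-aware instance reweighting in its simplest form: each logged sample is weighted by the ratio of the target policy's action probability to the estimated logging probability, and that weight is shrunk when the estimated logging probability is small or carries high estimation variance. The function names, the variance-based shrinkage factor, and the parameter lam are illustrative assumptions for exposition, not the exact UIPS formulation from the paper.

import numpy as np

def ips_weights(pi_target, pi_logged_hat, eps=1e-6):
    # Vanilla inverse propensity score weights: target prob / estimated logging prob.
    return pi_target / np.clip(pi_logged_hat, eps, None)

def uncertainty_aware_weights(pi_target, pi_logged_hat, var_logged_hat, lam=1.0):
    # Hypothetical uncertainty-aware reweighting (illustration only): shrink the
    # IPS weight of samples whose estimated logging probability is small or has
    # high estimation variance, so they contribute less to policy optimization.
    w = ips_weights(pi_target, pi_logged_hat)
    shrink = pi_logged_hat ** 2 / (pi_logged_hat ** 2 + lam * var_logged_hat)
    return w * shrink

def off_policy_value(rewards, pi_target, pi_logged_hat, var_logged_hat):
    # Estimate the target policy's value from logged (context, action, reward) data.
    w = uncertainty_aware_weights(pi_target, pi_logged_hat, var_logged_hat)
    return float(np.mean(w * rewards))

With lam = 0 the shrinkage factor is 1 and the estimator reduces to plain IPS with the estimated logging policy; larger lam trades variance reduction against additional bias on uncertain samples.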
Pages: 28
Related Papers
50 items in total
  • [21] Learning Routines for Effective Off-Policy Reinforcement Learning
    Cetin, Edoardo
    Celiktutan, Oya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [22] Safe and efficient off-policy reinforcement learning
    Munos, Remi
    Stepleton, Thomas
    Harutyunyan, Anna
    Bellemare, Marc G.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [23] Conditional Importance Sampling for Off-Policy Learning
    Rowland, Mark
    Harutyunyan, Anna
    van Hasselt, Hado
    Borsa, Diana
    Schaul, Tom
    Munos, Remi
    Dabney, Will
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 45 - 54
  • [24] Chaining Value Functions for Off-Policy Learning
    Schmitt, Simon
    Shawe-Taylor, John
    van Hasselt, Hado
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 8187 - 8195
  • [25] Off-policy Learning With Eligibility Traces: A Survey
    Geist, Matthieu
    Scherrer, Bruno
    JOURNAL OF MACHINE LEARNING RESEARCH, 2014, 15 : 289 - 333
  • [26] Bounds for Off-policy Prediction in Reinforcement Learning
    Joseph, Ajin George
    Bhatnagar, Shalabh
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 3991 - 3997
  • [27] Off-Policy Imitation Learning from Observations
    Zhu, Zhuangdi
    Lin, Kaixiang
    Dai, Bo
    Zhou, Jiayu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [28] The Pitfalls of Regularization in Off-Policy TD Learning
    Manek, Gaurav
    Kolter, J. Zico
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [29] Off-Policy Learning-to-Bid with AuctionGym
    Jeunen, Olivier
    Murphy, Sean
    Allison, Ben
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 4219 - 4228
  • [30] Off-Policy Reinforcement Learning with Gaussian Processes
    Chowdhary, Girish
    Liu, Miao
    Grande, Robert
    Walsh, Thomas
    How, Jonathan
    Carin, Lawrence
    IEEE/CAA Journal of Automatica Sinica, 2014, 1 (03) : 227 - 238