Addressing maximization bias in reinforcement learning with two-sample testing

Cited by: 0
Authors
Waltz, Martin [1 ]
Okhrin, Ostap [1 ,2 ]
Affiliations
[1] Technische Universität Dresden, Chair of Econometrics and Statistics, esp. in the Transport Sector, D-01062 Dresden, Germany
[2] Center for Scalable Data Analytics and Artificial Intelligence, Dresden/Leipzig, Germany
Keywords
Maximum expected value; Two-sample testing; Reinforcement learning; Q-learning; Estimation bias
DOI
10.1016/j.artint.2024.104204
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Value-based reinforcement-learning algorithms have shown strong results in games, robotics, and other real-world applications. Overestimation bias is a known threat to these algorithms and can sometimes lead to dramatic performance drops or even complete algorithmic failure. We frame the bias problem statistically, viewing it as an instance of estimating the maximum expected value (MEV) of a set of random variables. We propose the T-Estimator (TE), based on two-sample testing for the mean, which flexibly interpolates between over- and underestimation by adjusting the significance level of the underlying hypothesis tests. We also introduce a generalization, termed the K-Estimator (KE), which obeys the same bias and variance bounds as the TE while relying on a nearly arbitrary kernel function. Using the TE and the KE, we introduce modifications of Q-Learning and the Bootstrapped Deep Q-Network (BDQN) and prove convergence in the tabular setting. Furthermore, we propose an adaptive variant of the TE-based BDQN that dynamically adjusts the significance level to minimize the absolute estimation bias. All proposed estimators and algorithms are thoroughly tested and validated on diverse tasks and environments, illustrating the bias-control and performance potential of the TE and KE.
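The abstract describes estimating the MEV of a set of random variables, where the naive maximum of sample means overestimates, and a test-based estimator whose significance level trades off over- against underestimation. The sketch below is only a rough illustration of that idea, not the authors' exact TE: the selection rule (average all sample means a level-alpha test cannot distinguish from the empirical leader) is a hypothetical simplification, and the Welch statistic uses a normal approximation rather than a proper Student's t reference distribution.

```python
import math
import numpy as np

def naive_max_estimator(samples):
    # Maximum of the sample means; a biased-upward estimate of max_i E[X_i].
    return max(float(np.mean(s)) for s in samples)

def welch_p_less(x, y):
    # One-sided p-value for H1: E[x] < E[y], using the Welch statistic with a
    # standard-normal approximation (simplified for illustration).
    se = math.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    t = (np.mean(x) - np.mean(y)) / se
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # Phi(t)

def t_estimator(samples, alpha=0.05):
    # Hypothetical TE-style estimator: average the sample means of every
    # variable that a level-alpha test cannot distinguish from the leader.
    means = [float(np.mean(s)) for s in samples]
    best = int(np.argmax(means))
    kept = [means[best]]
    for i, s in enumerate(samples):
        if i != best and welch_p_less(s, samples[best]) > alpha:
            kept.append(means[i])
    return float(np.mean(kept))
```

Raising `alpha` toward 1 excludes every near-tie and recovers the naive max-of-means (overestimation); lowering it keeps more candidate means in the average and pushes the estimate down, which is the interpolation behavior the abstract attributes to the significance level.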
Pages: 37
Related Papers
50 records in total
  • [1] Two-sample Testing Using Deep Learning
    Kirchler, Matthias
    Khorasani, Shahryar
    Kloft, Marius
    Lippert, Christoph
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 1387 - 1397
  • [2] Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data
    Liu, Feng
    Xu, Wenkai
    Lu, Jie
    Sutherland, Danica J.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] Learning to Rank Anomalies: Scalar Performance Criteria and Maximization of Two-Sample Rank Statistics
    Limnios, Myrto
    Noiry, Nathan
    Clemencon, Stephan
    THIRD INTERNATIONAL WORKSHOP ON LEARNING WITH IMBALANCED DOMAINS: THEORY AND APPLICATIONS, VOL 154, 2021, 154 : 63 - 75
  • [4] Addressing Sample Efficiency and Model-bias in Model-based Reinforcement Learning
    Anand, Akhil S.
    Kveen, Jens Erik
    Abu-Dakka, Fares
    Grotli, Esten Ingar
    Gravdahl, Jan Tommy
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1 - 6
  • [5] Bayesian Kernel Two-Sample Testing
    Zhang, Qinyi
    Wild, Veit
    Filippi, Sarah
    Flaxman, Seth
    Sejdinovic, Dino
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2022, 31 (04) : 1164 - 1176
  • [6] Nonparametric Two-Sample Testing by Betting
    Shekhar, Shubhanshu
    Ramdas, Aaditya
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2024, 70 (02) : 1178 - 1203
  • [7] Two-sample testing in high dimensions
    Stadler, Nicolas
    Mukherjee, Sach
    JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-STATISTICAL METHODOLOGY, 2017, 79 (01) : 225 - 246
  • [8] Testing variability in the two-sample case
    Ramsey, Philip H.
    Ramsey, Patricia P.
    COMMUNICATIONS IN STATISTICS-SIMULATION AND COMPUTATION, 2007, 36 (02) : 233 - 248
  • [9] Two-sample testing for random graphs
    Wen, Xiaoyi
    STATISTICAL ANALYSIS AND DATA MINING, 2024, 17 (04)
  • [10] Addressing Hindsight Bias in Multigoal Reinforcement Learning
    Bai, Chenjia
    Wang, Lingxiao
    Wang, Yixin
    Wang, Zhaoran
    Zhao, Rui
    Bai, Chenyao
    Liu, Peng
    IEEE TRANSACTIONS ON CYBERNETICS, 2023, 53 (01) : 392 - 405