Dynamic Treatment Regimes with Replicated Observations Available for Error-Prone Covariates: A Q-Learning Approach

Cited: 0
Authors
Liu, Dan [1]
He, Wenqing [1,2]
Affiliations
[1] Univ Western Ontario, Dept Stat & Actuarial Sci, 1151 Richmond St, London, ON N6A 5B7, Canada
[2] Univ Western Ontario, Dept Oncol, 800 Commissioners Rd E, London, ON N6A 5W9, Canada
Keywords
Covariate measurement error; Q-learning; Regression calibration; Replicate data; Logistic regression; Inference; Depression
DOI
10.1007/s12561-024-09471-4
CLC Classification
Q [Biological Sciences];
Subject Classification
07 ; 0710 ; 09 ;
Abstract
Dynamic treatment regimes (DTRs) have received increasing interest in recent years. DTRs are sequences of treatment decision rules tailored to patient-level information. The main goal of a DTR study is to identify an optimal DTR, a sequence of treatment decision rules that yields the best expected clinical outcome. Q-learning has been regarded as one of the most popular regression-based methods for estimating the optimal DTR. However, it has rarely been studied in an error-prone setting, where patient information is contaminated with measurement error. In this article, we shed light on the effect of covariate measurement error on Q-learning and propose an effective method to correct the error in Q-learning. Simulation studies are conducted to assess the performance of the proposed correction method in Q-learning. We illustrate the use of the proposed method in an application to the Sequenced Treatment Alternatives to Relieve Depression data.
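To make the abstract's ingredients concrete, the following is a minimal single-stage sketch of regression calibration with replicated error-prone measurements plugged into a linear Q-function. It is an illustration of the general technique, not the authors' exact estimator; all data-generating values, variable names, and the one-stage setup are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated data (all values are illustrative assumptions) ---
n, k = 500, 3                       # subjects, replicates per subject
X = rng.normal(0, 1, n)             # true covariate (unobserved)
W = X[:, None] + rng.normal(0, 0.5, (n, k))  # replicated error-prone measurements
A = rng.integers(0, 2, n)           # binary treatment, randomized here
Y = 1 + X + A * (1 - 2 * X) + rng.normal(0, 0.5, n)  # outcome; true rule: treat if X < 0.5

# --- Regression calibration using the replicates ---
W_bar = W.mean(axis=1)
sigma2_u = ((W - W_bar[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject error variance
sigma2_x = W_bar.var(ddof=1) - sigma2_u / k                   # variance of the true covariate
lam = sigma2_x / (sigma2_x + sigma2_u / k)                    # attenuation factor
mu_w = W_bar.mean()
X_hat = mu_w + lam * (W_bar - mu_w)                           # calibrated covariate E[X | W_bar]

# --- Q-learning step: fit Q(x, a) = b0 + b1*x + a*(b2 + b3*x) by least squares ---
D = np.column_stack([np.ones(n), X_hat, A, A * X_hat])
beta, *_ = np.linalg.lstsq(D, Y, rcond=None)

# Estimated optimal rule: treat when the treatment contrast b2 + b3*x is positive
treat = (beta[2] + beta[3] * X_hat > 0).astype(int)
```

Using the error-prone mean `W_bar` directly in the regression would attenuate the interaction coefficient and distort the decision boundary; the calibration step shrinks `W_bar` toward its mean by the estimated reliability `lam` before the Q-function is fit.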
Pages: 25
Related Papers
50 records in total
  • [21] Q-learning for estimating optimal dynamic treatment rules from observational data
    Moodie, Erica E. M.
    Chakraborty, Bibhas
    Kramer, Michael S.
    CANADIAN JOURNAL OF STATISTICS-REVUE CANADIENNE DE STATISTIQUE, 2012, 40 (04): 629 - 645
  • [22] A Bayesian Machine Learning Approach for Optimizing Dynamic Treatment Regimes
    Murray, Thomas A.
    Yuan, Ying
    Thall, Peter F.
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2018, 113 (523) : 1255 - 1267
  • [23] Dynamic Courier Capacity Acquisition in Rapid Delivery Systems: A Deep Q-Learning Approach
    Auad, Ramon
    Erera, Alan
    Savelsbergh, Martin
    TRANSPORTATION SCIENCE, 2024, 58 (01) : 67 - 93
  • [24] A novel dynamic integration approach for multiple load forecasts based on Q-learning algorithm
    Ma, Minhua
    Jin, Bingjie
    Luo, Shuxin
    Guo, Shaoqing
    Huang, Hongwei
    INTERNATIONAL TRANSACTIONS ON ELECTRICAL ENERGY SYSTEMS, 2020, 30 (07):
  • [25] A path planning approach for unmanned surface vehicles based on dynamic and fast Q-learning
    Hao, Bing
    Du, He
    Yan, Zheping
    OCEAN ENGINEERING, 2023, 270
  • [26] A Multiagent Dynamic Assessment Approach for Water Quality Based on Improved Q-Learning Algorithm
    Ni, Jianjun
    Ren, Li
    Liu, Minghua
    Zhu, Daqi
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2013, 2013
  • [27] DQDWA: Dynamic Weight Coefficients Based on Q-learning for Dynamic Window Approach Considering Environmental Situations
    Kobayashi, Masato
    Zushi, Hiroka
    Nakamura, Tomoaki
    Motoi, Naoki
    2023 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, AIM, 2023, : 1141 - 1146
  • [28] Early MTS Forecasting for Dynamic Stock Prediction: A Double Q-Learning Ensemble Approach
    Kumar, Santosh
    Alsamhi, Mohammed H.
    Kumar, Sunil
    Shvetsov, Alexey V.
    Alsamhi, Saeed Hamood
    IEEE ACCESS, 2024, 12 : 69796 - 69811
  • [29] Q- and A-Learning Methods for Estimating Optimal Dynamic Treatment Regimes
    Schulte, Phillip J.
    Tsiatis, Anastasios A.
    Laber, Eric B.
    Davidian, Marie
    STATISTICAL SCIENCE, 2014, 29 (04) : 640 - 661
  • [30] Integration of Q-learning and Behavior Network Approach with Hierarchical Task Network Planning for Dynamic Environments
    Sung, Yunsick
    Cho, Kyungeun
    Um, Kyhyun
    INFORMATION-AN INTERNATIONAL INTERDISCIPLINARY JOURNAL, 2012, 15 (05): 2079 - 2090