Learning optimal dynamic treatment regimes from longitudinal data

Cited: 0
Authors
Williams, Nicholas T. [1 ]
Hoffman, Katherine L. [1 ]
Diaz, Ivan [2 ]
Rudolph, Kara E. [1 ]
Affiliations
[1] Columbia Univ, Mailman Sch Publ Hlth, Dept Epidemiol, 722 W 168th St,Room 522, New York, NY 10032 USA
[2] NYU, Grossman Sch Med, Dept Populat Hlth Sci, Div Biostat, New York, NY 10016 USA
Keywords
precision medicine; causal inference; optimal treatment rules; longitudinal studies; doubly robust methods; individualized treatment rules; buprenorphine-naloxone
DOI
10.1093/aje/kwae122
Chinese Library Classification (CLC)
R1 [preventive medicine, hygiene]
Discipline classification codes
1004; 120402
Abstract
Investigators often report estimates of the average treatment effect (ATE). While the ATE summarizes the effect of a treatment on average, it does not provide any information about the effect of treatment within any individual. A treatment strategy that uses an individual's information to tailor treatment to maximize benefit is known as an optimal dynamic treatment rule (ODTR). Treatment, however, is typically not limited to a single point in time; consequently, learning an optimal rule for a time-varying treatment may involve not just learning the extent to which the comparative treatments' benefits vary across the characteristics of individuals, but also the extent to which those benefits vary as relevant circumstances evolve within an individual. The goal of this paper is to provide applied researchers with a tutorial for estimating ODTRs from longitudinal observational and clinical trial data. We describe an approach that uses a doubly robust unbiased transformation of the conditional ATE. We then learn a time-varying ODTR for when to increase buprenorphine-naloxone dose to minimize return to regular opioid use among patients with opioid use disorder. Our analysis highlights the utility of ODTRs in the context of sequential decision-making: the learned ODTR outperforms a clinically defined strategy. This article is part of a Special Collection on Pharmacoepidemiology.
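To make the abstract's core idea concrete, here is a minimal single-time-point sketch of a doubly robust (AIPW-style) unbiased transformation of the conditional ATE, with the learned rule treating whenever the predicted conditional effect is positive. This is an illustration on simulated data, not the authors' implementation: the paper's estimator handles multiple decision points and uses cross-fitting, and all names here (`pi_hat`, `mu0`, `mu1`, `phi`) are assumptions for exposition.

```python
# Sketch: doubly robust pseudo-outcome for the conditional ATE (single time point).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 1))
A = rng.binomial(1, 0.5, size=n)          # randomized treatment for simplicity
cate_true = X[:, 0]                        # treatment helps only when X > 0
Y = X[:, 0] + A * cate_true + rng.normal(scale=0.5, size=n)

# Nuisance estimates: propensity pi(X) and outcome regressions mu_a(X).
pi_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# Doubly robust unbiased transformation: E[phi | X] equals the conditional ATE
# if either the propensity model or the outcome model is correctly specified.
phi = (mu1 - mu0
       + (A / pi_hat) * (Y - mu1)
       - ((1 - A) / (1 - pi_hat)) * (Y - mu0))

# Learn the rule by regressing the pseudo-outcome on covariates;
# treat whenever the predicted conditional effect is positive.
cate_model = LinearRegression().fit(X, phi)
rule = (cate_model.predict(X) > 0).astype(int)
```

Extending this to the longitudinal setting described in the abstract requires repeating the transformation backward through the decision points, which is where the sequential-decision machinery of the paper comes in.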
Pages: 8
Related papers
50 records in total
  • [1] Near-Optimal Reinforcement Learning in Dynamic Treatment Regimes
    Zhang, Junzhe
    Bareinboim, Elias
    Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32
  • [2] Multicategory Angle-Based Learning for Estimating Optimal Dynamic Treatment Regimes With Censored Data
    Xue, Fei
    Zhang, Yanqing
    Zhou, Wenzhuo
    Fu, Haoda
    Qu, Annie
    Journal of the American Statistical Association, 2022, 117 (539): 1438-1451
  • [4] Learning and Assessing Optimal Dynamic Treatment Regimes Through Cooperative Imitation Learning
    Shah, Syed Ihtesham Hussain
    Coronato, Antonio
    Naeem, Muddasar
    De Pietro, Giuseppe
    IEEE Access, 2022, 10 : 78148 - 78158
  • [6] Accountable survival contrast-learning for optimal dynamic treatment regimes
    Choi, Taehwa
    Lee, Hyunjun
    Choi, Sangbum
    Scientific Reports, 2023, 13 (1)
  • [7] New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes
    Zhao, Ying-Qi
    Zeng, Donglin
    Laber, Eric B.
    Kosorok, Michael R.
    Journal of the American Statistical Association, 2015, 110 (510): 583-598
  • [9] High-Dimensional A-Learning for Optimal Dynamic Treatment Regimes
    Shi, Chengchun
    Fan, Ailin
    Song, Rui
    Lu, Wenbin
    Annals of Statistics, 2018, 46 (3): 925-957
  • [10] Demystifying optimal dynamic treatment regimes
    Moodie, Erica E. M.
    Richardson, Thomas S.
    Stephens, David A.
    Biometrics, 2007, 63 (2): 447-455