Optimizing active surveillance for prostate cancer using partially observable Markov decision processes

Cited by: 5
Authors
Li, Weiyu [1 ]
Denton, Brian T. [1 ]
Morgan, Todd M. [2 ]
Affiliations
[1] Univ Michigan, Dept Ind & Operat Engn, 1205 Beal Ave, Ann Arbor, MI 48109 USA
[2] Univ Michigan, Med Ctr 1500 E, Dept Urol, Ann Arbor, MI 48109 USA
Funding
National Science Foundation (USA);
Keywords
OR in medicine; Decision process; Medical decision making; Partially observable Markov decision process; Prostate cancer; VALUE-ITERATION; BIOPSY; OPTIMIZATION; STRATEGIES; PATIENT; SYSTEM;
DOI
10.1016/j.ejor.2022.05.043
Chinese Library Classification
C93 [Management];
Discipline codes
12; 1201; 1202; 120202;
Abstract
We describe a finite-horizon partially observable Markov decision process (POMDP) approach to optimize decisions about whether and when to perform biopsies for patients on active surveillance for prostate cancer. The objective is to minimize a weighted combination of two criteria: the number of biopsies to conduct over a patient's lifetime and the delay in detecting high-risk cancer that warrants more aggressive treatment. Our study also considers the impact of parameter ambiguity caused by variation across models fitted to different clinical studies and variation in the weights attributed to the reward criteria according to patients' preferences. We introduce two fast approximation algorithms for the proposed model and describe some properties of the optimal policy, including the existence of a control-limit type policy. The numerical results show that our approximations perform well, and we use them to compare the model-based biopsy policies to published guidelines. Although our focus is on prostate cancer active surveillance, there are lessons to be learned for applications to other chronic diseases. © 2022 Elsevier B.V. All rights reserved.
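The abstract describes tracking a belief over a partially observed cancer state and acting by a control-limit rule (biopsy once the belief in high-risk cancer crosses a threshold). As a loose illustration only, not the paper's model, the sketch below shows the standard two-state POMDP belief update (predict with the transition matrix, correct with the observation likelihood, normalize) paired with a threshold rule. The transition matrix, observation likelihoods, observation sequence, and threshold are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical two-state model: state 0 = low-risk, state 1 = high-risk.
# Progression to high-risk is modeled as absorbing (illustrative numbers only).
P = np.array([[0.95, 0.05],
              [0.00, 1.00]])

# Hypothetical observation likelihoods P(obs | state), e.g. a noisy test signal.
OBS = {"normal":   np.array([0.8, 0.3]),
       "elevated": np.array([0.2, 0.7])}

def belief_update(b, obs):
    """Standard POMDP belief update: predict one transition, then correct."""
    predicted = b @ P                 # prior belief after one transition step
    unnorm = predicted * OBS[obs]     # weight by the observation likelihood
    return unnorm / unnorm.sum()      # renormalize to a probability vector

def biopsy_decision(b, threshold=0.25):
    """Control-limit rule: biopsy once the high-risk belief crosses a threshold."""
    return b[1] >= threshold

b = np.array([0.9, 0.1])              # hypothetical initial belief
for obs in ["elevated", "elevated", "normal"]:
    b = belief_update(b, obs)
print(round(float(b[1]), 3), bool(biopsy_decision(b)))  # prints: 0.489 True
```

The paper's actual model is finite-horizon with a weighted biopsy/delay objective and approximation algorithms; the threshold here only mirrors the control-limit *form* of policy the abstract mentions.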
Pages: 386-399
Page count: 14
Related papers
50 items total
  • [21] Transition Entropy in Partially Observable Markov Decision Processes
    Melo, Francisco S.
    Ribeiro, Isabel
    [J]. INTELLIGENT AUTONOMOUS SYSTEMS 9, 2006, : 282 - +
  • [22] On Anderson Acceleration for Partially Observable Markov Decision Processes
    Ermis, Melike
    Park, Mingyu
    Yang, Insoon
    [J]. 2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 4478 - 4485
  • [23] Partially Observable Markov Decision Processes in Robotics: A Survey
    Lauri, Mikko
    Hsu, David
    Pajarinen, Joni
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (01) : 21 - 40
  • [24] A primer on partially observable Markov decision processes (POMDPs)
    Chades, Iadine
    Pascal, Luz V.
    Nicol, Sam
    Fletcher, Cameron S.
    Ferrer-Mestres, Jonathan
    [J]. METHODS IN ECOLOGY AND EVOLUTION, 2021, 12 (11): : 2058 - 2072
  • [25] Minimal Disclosure in Partially Observable Markov Decision Processes
    Bertrand, Nathalie
    Genest, Blaise
    [J]. IARCS ANNUAL CONFERENCE ON FOUNDATIONS OF SOFTWARE TECHNOLOGY AND THEORETICAL COMPUTER SCIENCE (FSTTCS 2011), 2011, 13 : 411 - 422
  • [26] Partially observable Markov decision processes with imprecise parameters
    Itoh, Hideaki
    Nakamura, Kiyohiko
    [J]. ARTIFICIAL INTELLIGENCE, 2007, 171 (8-9) : 453 - 490
  • [27] Nonapproximability results for partially observable Markov decision processes
    Lusena, Cristopher
    Goldsmith, Judy
    Mundhenk, Martin
    [J]. 1600, Morgan Kaufmann Publishers (14):
  • [28] Optimizing Spatial and Temporal Reuse in Wireless Networks by Decentralized Partially Observable Markov Decision Processes
    Pajarinen, Joni
    Hottinen, Ari
    Peltonen, Jaakko
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2014, 13 (04) : 866 - 879
  • [29] THE PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES FRAMEWORK IN MEDICAL DECISION MAKING
    Goulionis, John E.
    Stengos, Dimitrios I.
    [J]. ADVANCES AND APPLICATIONS IN STATISTICS, 2008, 9 (02) : 205 - 232
  • [30] Trainbot: a Spoken Dialog System Using Partially Observable Markov Decision Processes
    Zhou, Weidong
    Yuan, Baozong
    [J]. ICWMMN 2010, PROCEEDINGS, 2010, : 381 - 384