Optimizing active surveillance for prostate cancer using partially observable Markov decision processes

Cited by: 5
Authors
Li, Weiyu [1]
Denton, Brian T. [1]
Morgan, Todd M. [2]
Affiliations
[1] Univ Michigan, Dept Ind & Operat Engn, 1205 Beal Ave, Ann Arbor, MI 48109 USA
[2] Univ Michigan, Med Ctr 1500 E, Dept Urol, Ann Arbor, MI 48109 USA
Funding
National Science Foundation (USA)
Keywords
OR in medicine; Decision process; Medical decision making; Partially observable Markov decision process; Prostate cancer; Value iteration; Biopsy; Optimization; Strategies; Patient; System
DOI
10.1016/j.ejor.2022.05.043
Chinese Library Classification
C93 [Management]
Subject classification codes
12; 1201; 1202; 120202
Abstract
We describe a finite-horizon partially observable Markov decision process (POMDP) approach to optimize decisions about whether and when to perform biopsies for patients on active surveillance for prostate cancer. The objective is to minimize a weighted combination of two criteria: the number of biopsies conducted over a patient's lifetime and the delay in detecting high-risk cancer that warrants more aggressive treatment. Our study also considers the impact of parameter ambiguity caused by variation across models fitted to different clinical studies and by variation in the weights attributed to the reward criteria according to patients' preferences. We introduce two fast approximation algorithms for the proposed model and describe some properties of the optimal policy, including the existence of a control-limit type policy. The numerical results show that our approximations perform well, and we use them to compare the model-based biopsy policies to published guidelines. Although our focus is on prostate cancer active surveillance, there are lessons to be learned for applications to other chronic diseases. © 2022 Elsevier B.V. All rights reserved.
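The core mechanics behind the abstract's model can be illustrated with a minimal two-state POMDP: a hidden cancer state ({low-risk, high-risk}), noisy observations (e.g. a test signal), a Bayesian belief update, and a control-limit policy that orders a biopsy once the belief of high-risk cancer crosses a threshold. The transition/observation probabilities and the threshold below are made-up illustrative numbers, not the parameters fitted in the paper:

```python
import numpy as np

# Illustrative two-state POMDP for active surveillance (hypothetical
# numbers, not the paper's fitted parameters).
# Hidden states: 0 = low-risk, 1 = high-risk (absorbing).
P = np.array([[0.95, 0.05],   # annual transition probabilities
              [0.00, 1.00]])
# Observation model O[s, o] = P(signal o | hidden state s),
# e.g. a PSA-like test with o = 0 (negative) or 1 (positive).
O = np.array([[0.80, 0.20],   # low-risk: mostly negative signals
              [0.30, 0.70]])  # high-risk: mostly positive signals

def belief_update(b, obs):
    """One-period Bayes update of the belief over hidden states."""
    pred = b @ P                # predict the state forward one period
    post = pred * O[:, obs]     # weight by the observation likelihood
    return post / post.sum()    # renormalize to a probability vector

b = np.array([0.9, 0.1])        # prior: 10% chance of high-risk cancer
for obs in [1, 1]:              # two consecutive positive signals
    b = belief_update(b, obs)

# Control-limit policy: biopsy once P(high-risk) exceeds a threshold.
THRESHOLD = 0.35                # hypothetical control limit
action = "biopsy" if b[1] > THRESHOLD else "wait"
print(round(b[1], 3), action)
```

The control-limit structure shown in the last lines is the kind of policy whose optimality the paper establishes: the belief in high-risk cancer is a sufficient statistic, and the optimal action switches from "wait" to "biopsy" at a single threshold.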
Pages: 386-399 (14 pages)
Related papers (50 records)
  • [1] Active learning in partially observable Markov decision processes. Jaulmes, R.; Pineau, J.; Precup, D. Machine Learning: ECML 2005, Proceedings, 2005, 3720: 601-608.
  • [2] Active Chemical Sensing With Partially Observable Markov Decision Processes. Gosangi, Rakesh; Gutierrez-Osuna, Ricardo. Olfaction and Electronic Nose, Proceedings, 2009, 1137: 562-565.
  • [3] Cost-Bounded Active Classification Using Partially Observable Markov Decision Processes. Wu, Bo; Ahmadi, Mohamadreza; Bharadwaj, Suda; Topcu, Ufuk. 2019 American Control Conference (ACC), 2019: 1216-1223.
  • [4] Medical Treatments Using Partially Observable Markov Decision Processes. Goulionis, John E. JP Journal of Biostatistics, 2009, 3 (02): 77-97.
  • [5] Partially Observable Markov Decision Processes and Robotics. Kurniawati, Hanna. Annual Review of Control, Robotics, and Autonomous Systems, 2022, 5: 253-277.
  • [6] A tutorial on partially observable Markov decision processes. Littman, Michael L. Journal of Mathematical Psychology, 2009, 53 (03): 119-125.
  • [7] Quantum partially observable Markov decision processes. Barry, Jennifer; Barry, Daniel T.; Aaronson, Scott. Physical Review A, 2014, 90 (03).
  • [8] Partially observable Markov decision model for the treatment of early Prostate Cancer. Goulionis, J. E.; Koutsiumaris, B. K. OPSEARCH, 2010, 47 (2): 105-117.
  • [9] Partially Observable Markov Decision Processes with Partially Observable Random Discount Factors. Martinez-Garcia, E. Everardo; Minjarez-Sosa, J. Adolfo; Vega-Amaya, Oscar. Kybernetika, 2022, 58 (06): 960-983.
  • [10] Online Active Perception for Partially Observable Markov Decision Processes with Limited Budget. Ghasemi, Mahsa; Topcu, Ufuk. 2019 IEEE 58th Conference on Decision and Control (CDC), 2019: 6169-6174.