Semi-Markov Offline Reinforcement Learning for Healthcare

Cited: 0
Authors
Fatemi, Mehdi [1]
Wu, Mary [2]
Petch, Jeremy [3]
Nelson, Walter [3]
Connolly, Stuart J. [4]
Benz, Alexander [4]
Carnicelli, Anthony [5]
Ghassemi, Marzyeh [6]
Affiliations
[1] Microsoft Res, Redmond, WA 98052 USA
[2] Univ Toronto, Toronto, ON, Canada
[3] Hamilton Hlth Sci, Hamilton, ON, Canada
[4] Populat Hlth Res Inst, Hamilton, ON, Canada
[5] Duke Univ, Durham, NC 27706 USA
[6] MIT, Cambridge, MA 02139 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
WARFARIN;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Reinforcement learning (RL) tasks are typically framed as Markov Decision Processes (MDPs), under the assumption that decisions are made at fixed time intervals. However, many important applications, including healthcare, do not satisfy this assumption, yet they are commonly modelled as MDPs after an artificial reshaping of the data. In addition, most healthcare (and similar) problems are offline by nature, allowing only for retrospective studies. To address both challenges, we begin by discussing the Semi-MDP (SMDP) framework, which formally handles actions of variable duration. We next present a formal way to apply SMDP modifications to nearly any given value-based offline RL method. We use this theory to introduce three SMDP-based offline RL algorithms, namely SDQN, SDDQN, and SBCQ. We then experimentally demonstrate that only these SMDP-based algorithms learn the optimal policy in variable-time environments, whereas their MDP counterparts do not. Finally, we apply our new algorithms to a real-world offline dataset of warfarin dosing for stroke prevention and demonstrate similar results.
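The key SMDP modification to a value-based method is a change to the bootstrapped target: a transition whose action remained in effect for a variable duration tau contributes the discounted sum of the per-step rewards accrued over those tau steps, and the next-state value is discounted by gamma^tau rather than by a fixed gamma (tau = 1 recovers the usual MDP target). Below is a minimal sketch of this standard SMDP Q-learning target, with illustrative names rather than the authors' code:

    def smdp_td_target(rewards, q_next_max, gamma=0.99):
        """One-step SMDP target for a transition whose action stayed in
        effect for tau = len(rewards) time steps.

        rewards:    per-step rewards logged while the action was in effect
        q_next_max: max over a' of Q(s', a') at the state reached after tau steps
        gamma:      per-step discount factor
        """
        tau = len(rewards)
        # Discount the intermediate rewards step by step, then discount the
        # bootstrapped value by gamma**tau instead of a single gamma.
        g = sum(gamma ** k * r for k, r in enumerate(rewards))
        return g + gamma ** tau * q_next_max

    # Hypothetical example: a dosing decision held in effect for three days.
    target = smdp_td_target(rewards=[0.0, 0.0, 1.0], q_next_max=2.5, gamma=0.99)

Substituting such a target for the usual one-step target in DQN, Double DQN, or BCQ is, at a sketch level, what the SDQN/SDDQN/SBCQ naming suggests.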
Pages: 119-137
Page count: 19