Semi-Markov Offline Reinforcement Learning for Healthcare

Cited by: 0
Authors
Fatemi, Mehdi [1 ]
Wu, Mary [2 ]
Petch, Jeremy [3 ]
Nelson, Walter [3 ]
Connolly, Stuart J. [4 ]
Benz, Alexander [4 ]
Carnicelli, Anthony [5 ]
Ghassemi, Marzyeh [6 ]
Affiliations
[1] Microsoft Res, Redmond, WA 98052 USA
[2] Univ Toronto, Toronto, ON, Canada
[3] Hamilton Hlth Sci, Hamilton, ON, Canada
[4] Populat Hlth Res Inst, Hamilton, ON, Canada
[5] Duke Univ, Durham, NC 27706 USA
[6] MIT, Cambridge, MA 02139 USA
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
WARFARIN;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning (RL) tasks are typically framed as Markov Decision Processes (MDPs), assuming that decisions are made at fixed time intervals. However, many applications of great importance, including healthcare, do not satisfy this assumption, yet they are commonly modelled as MDPs after an artificial reshaping of the data. In addition, most healthcare (and similar) problems are offline by nature, allowing only for retrospective studies. To address both challenges, we begin by discussing the Semi-MDP (SMDP) framework, which formally handles actions of variable duration. We next present a formal way to apply the SMDP modification to nearly any given value-based offline RL method. We use this theory to introduce three SMDP-based offline RL algorithms, namely, SDQN, SDDQN, and SBCQ. We then experimentally demonstrate that only these SMDP-based algorithms learn the optimal policy in variable-time environments, whereas their MDP counterparts do not. Finally, we apply our new algorithms to a real-world offline dataset pertaining to warfarin dosing for stroke prevention and demonstrate similar results.
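The SMDP modification the abstract refers to can be made concrete. In standard SMDP theory, the fixed one-step discount gamma in the Bellman target is replaced by gamma raised to the power of the sojourn time tau, the variable time elapsed between consecutive decisions; the same substitution can be dropped into the target computation of value-based methods such as DQN, Double DQN, and BCQ. The Python sketch below is illustrative only and is not taken from the paper; the function smdp_q_target and its signature are assumptions.

import numpy as np

def smdp_q_target(reward, tau, q_next, gamma=0.99):
    """SMDP-style Bellman target with duration-dependent discounting.

    The standard MDP target is r + gamma * max_a' Q(s', a'). Under the
    SMDP view, the discount is raised to the power of the sojourn time
    tau, so transitions of different durations are weighted consistently.

    reward : reward accumulated over the transition
    tau    : elapsed time between the two decision points
    q_next : 1-D array of Q(s', a') estimates over the actions a'
    """
    return reward + gamma ** tau * np.max(q_next)

# Example: same reward and next-state values, but a transition that
# took 5 time units is discounted more heavily than one that took 1.
q_next = np.array([0.8, 1.2])
print(smdp_q_target(reward=0.5, tau=1.0, q_next=q_next))  # 0.5 + 0.99**1 * 1.2
print(smdp_q_target(reward=0.5, tau=5.0, q_next=q_next))  # 0.5 + 0.99**5 * 1.2

Setting tau = 1 for every transition recovers the usual fixed-interval MDP target, which is one way to see why an MDP agent trained on variable-time data implicitly treats all transitions as equally long.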
Pages: 119-137
Number of pages: 19