Semi-Markov Offline Reinforcement Learning for Healthcare

Cited: 0
Authors
Fatemi, Mehdi [1 ]
Wu, Mary [2 ]
Petch, Jeremy [3 ]
Nelson, Walter [3 ]
Connolly, Stuart J. [4 ]
Benz, Alexander [4 ]
Carnicelli, Anthony [5 ]
Ghassemi, Marzyeh [6 ]
Affiliations
[1] Microsoft Res, Redmond, WA 98052 USA
[2] Univ Toronto, Toronto, ON, Canada
[3] Hamilton Hlth Sci, Hamilton, ON, Canada
[4] Populat Hlth Res Inst, Hamilton, ON, Canada
[5] Duke Univ, Durham, NC 27706 USA
[6] MIT, Cambridge, MA 02139 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
WARFARIN;
DOI
Not available
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Reinforcement learning (RL) tasks are typically framed as Markov Decision Processes (MDPs), assuming that decisions are made at fixed time intervals. However, many important applications, including healthcare, do not satisfy this assumption, yet they are commonly modelled as MDPs after an artificial reshaping of the data. In addition, most healthcare (and similar) problems are offline by nature, allowing only for retrospective studies. To address both challenges, we begin by discussing the Semi-MDP (SMDP) framework, which formally handles actions with variable durations. We next present a formal way to apply SMDP modifications to nearly any given value-based offline RL method. We use this theory to introduce three SMDP-based offline RL algorithms, namely SDQN, SDDQN, and SBCQ. We then experimentally demonstrate that only these SMDP-based algorithms learn the optimal policy in variable-time environments, whereas their MDP counterparts do not. Finally, we apply our new algorithms to a real-world offline dataset of warfarin dosing for stroke prevention and demonstrate similar results.
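The abstract's central modification is concrete enough to sketch. Below is a minimal, illustrative Python sketch (not the authors' implementation; QNetwork, sdqn_target, and the batch field names are assumptions) of how a standard DQN target becomes an SDQN-style SMDP target: the reward is accumulated over each transition's sojourn, and the fixed discount gamma is replaced by gamma raised to the sojourn time tau. Applying the same substitution inside Double DQN and BCQ targets would yield SDDQN- and SBCQ-style variants.

```python
# Minimal sketch of the SMDP correction described in the abstract: replace the
# fixed one-step discount gamma of the MDP Bellman target with gamma**tau,
# where tau is the (variable) sojourn time of each logged transition.
# QNetwork and sdqn_target are illustrative names, not the authors' code.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small fully connected Q-network (illustrative)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def sdqn_target(q_target: QNetwork,
                reward: torch.Tensor,      # reward accumulated over the sojourn
                next_state: torch.Tensor,  # state observed after tau time units
                tau: torch.Tensor,         # sojourn time of each transition
                done: torch.Tensor,        # 1.0 for terminal transitions
                gamma: float) -> torch.Tensor:
    """SMDP Bellman target: r + gamma**tau * max_a' Q(s', a')."""
    with torch.no_grad():
        next_q = q_target(next_state).max(dim=1).values
        # With tau == 1 for every transition this reduces exactly to the
        # ordinary DQN target, which is why the modification can be grafted
        # onto nearly any value-based method.
        return reward + (gamma ** tau) * (1.0 - done) * next_q
```

In the offline setting this target would simply be computed over minibatches drawn from the fixed retrospective dataset, with no further environment interaction.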
Pages: 119-137
Page count: 19
Related Papers
50 records in total
  • [1] Semi-Markov Reinforcement Learning for Stochastic Resource Collection
    Schmoll, Sebastian
    Schubert, Matthias
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3349 - 3355
  • [2] A Sojourn-Based Approach to Semi-Markov Reinforcement Learning
    Ascione, Giacomo
    Cuomo, Salvatore
    JOURNAL OF SCIENTIFIC COMPUTING, 2022, 92 (02)
  • [3] An Inverse Reinforcement Learning Algorithm for semi-Markov Decision Processes
    Tan, Chuanfang
    Li, Yanjie
    Cheng, Yuhu
    2017 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2017, : 1256 - 1261
  • [4] Average Reward Reinforcement Learning for Semi-Markov Decision Processes
    Yang, Jiayuan
    Li, Yanjie
    Chen, Haoyao
    Li, Jiangang
    NEURAL INFORMATION PROCESSING, ICONIP 2017, PT I, 2017, 10634 : 768 - 777
  • [5] Offline and online identification of hidden semi-Markov models
    Azimi, M
    Nasiopoulos, P
    Ward, RK
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2005, 53 (08) : 2658 - 2663
  • [6] RVI Reinforcement Learning for Semi-Markov Decision Processes with Average Reward
    Li, Yanjie
    Cao, Fang
    2010 8TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2010, : 1674 - 1679
  • [7] Solving semi-Markov decision problems using average reward reinforcement learning
    Das, TK
    Gosavi, A
    Mahadevan, S
    Marchalleck, N
    MANAGEMENT SCIENCE, 1999, 45 (04) : 560 - 574
  • [8] Adaptive Honeypot Engagement Through Reinforcement Learning of Semi-Markov Decision Processes
    Huang, Linan
    Zhu, Quanyan
    DECISION AND GAME THEORY FOR SECURITY, 2019, 11836 : 196 - 216