Federated Offline Reinforcement Learning

Cited: 0
Authors
Zhou, Doudou [1 ]
Zhang, Yufeng [2 ,3 ]
Sonabend-W, Aaron [1 ]
Wang, Zhaoran [2 ,3 ]
Lu, Junwei [1 ]
Cai, Tianxi [1 ,4 ]
Affiliations
[1] Harvard TH Chan Sch Publ Hlth, Dept Biostat, Boston, MA 02115 USA
[2] Northwestern Univ, Dept Ind Engn, Evanston, IL 60208 USA
[3] Northwestern Univ, Dept Management Sci, Evanston, IL 60208 USA
[4] Harvard Med Sch, Dept Biomed Informat, Boston, MA USA
Keywords
Dynamic treatment regimes; Electronic health records; Multi-source learning; GUIDELINES; MODELS;
DOI
10.1080/01621459.2024.2310287
CLC Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Evidence-based, data-driven dynamic treatment regimes are essential to personalized medicine and can benefit from offline reinforcement learning (RL). Although massive amounts of healthcare data are available across medical institutions, sharing them is prohibited by privacy constraints. Moreover, heterogeneity exists across sites. Federated offline RL algorithms are therefore necessary and promising for addressing these problems. In this article, we propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites. The proposed model makes analysis of site-level features possible. We design the first federated policy optimization algorithm for offline RL with a sample complexity guarantee. The proposed algorithm is communication-efficient, requiring only a single round of communication in which summary statistics are exchanged. We give a theoretical guarantee for the proposed algorithm: the suboptimality of the learned policies is comparable to the rate achievable as if the data were not distributed. Extensive simulations demonstrate the effectiveness of the proposed algorithm. The method is applied to a sepsis dataset from multiple sites to illustrate its use in clinical settings. Supplementary materials for this article are available online, including a standardized description of the materials required to reproduce the work.
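The single-round, summary-statistic protocol described in the abstract can be illustrated with a generic sketch. This is not the paper's actual estimator; it shows the communication pattern only, using ordinary least squares as a stand-in for the per-site fit. The function names (`site_summary`, `server_aggregate`) and the ridge term `lam` are hypothetical choices for this illustration.

```python
import numpy as np

def site_summary(phi, targets):
    # Each site computes only summary statistics; no raw patient-level
    # data leaves the site. For a linear least-squares fit, the
    # sufficient statistics are A_k = Phi^T Phi and b_k = Phi^T y.
    A = phi.T @ phi
    b = phi.T @ targets
    return A, b

def server_aggregate(summaries, lam=1e-6):
    # Single communication round: the server sums the site statistics
    # and solves the pooled (ridge-regularized) normal equations.
    d = summaries[0][0].shape[0]
    A = sum(s[0] for s in summaries) + lam * np.eye(d)
    b = sum(s[1] for s in summaries)
    return np.linalg.solve(A, b)
```

Because the pooled normal equations are exactly the sum of the per-site ones, the aggregated solution here coincides with the fit on the combined data, which is the intuition behind a suboptimality rate "as if the data were not distributed."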
Pages: 12
Related Papers
50 in total
  • [1] Federated Offline Reinforcement Learning With Multimodal Data
    Wen, Jiabao
    Dai, Huiao
    He, Jingyi
    Xi, Meng
    Xiao, Shuai
    Yang, Jiachen
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 4266 - 4276
  • [2] Federated Offline Reinforcement Learning with Proximal Policy Evaluation
    Yue, Sheng
    Deng, Yongheng
    Wang, Guanbo
    Ren, Ju
    Zhang, Yaoxue
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (06) : 1360 - 1372
  • [3] Offline Reinforcement Learning with Pseudometric Learning
    Dadashi, Robert
    Rezaeifar, Shideh
    Vieillard, Nino
    Hussenot, Leonard
    Pietquin, Olivier
    Geist, Matthieu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [4] Benchmarking Offline Reinforcement Learning
    Tittaferrante, Andrew
    Yassine, Abdulsalam
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 259 - 263
  • [5] Distributed Offline Reinforcement Learning
    Heredia, Paulo
    George, Jemin
    Mou, Shaoshuai
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 4621 - 4626
  • [6] Learning Behavior of Offline Reinforcement Learning Agents
    Shukla, Indu
    Dozier, Haley R.
    Henslee, Althea C.
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS VI, 2024, 13051
  • [7] Bootstrapped Transformer for Offline Reinforcement Learning
    Wang, Kerong
    Zhao, Hanye
    Luo, Xufang
    Ren, Kan
    Zhang, Weinan
    Li, Dongsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [8] Offline Reinforcement Learning with Differential Privacy
    Qiao, Dan
    Wang, Yu-Xiang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [9] Conservative Offline Distributional Reinforcement Learning
    Ma, Yecheng Jason
    Jayaraman, Dinesh
    Bastani, Osbert
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34