CURE: A deep learning framework pre-trained on large-scale patient data for treatment effect estimation

Cited by: 3
Authors
Liu, Ruoqi [1 ]
Chen, Pin-Yu [2 ]
Zhang, Ping [1 ,3 ,4 ]
Affiliations
[1] Ohio State Univ, Dept Comp Sci & Engn, 2015 Neil Ave, Columbus, OH 43210 USA
[2] IBM Res, Yorktown Hts, NY 10598 USA
[3] Ohio State Univ, Dept Biomed Informat, 1800 Cannon Dr, Columbus, OH 43210 USA
[4] Ohio State Univ, Translat Data Analyt Inst, 1760 Neil Ave, Columbus, OH 43210 USA
Source
PATTERNS | 2024, Vol. 5, Issue 06
Funding
US National Institutes of Health;
Keywords
RANDOMIZED CONTROLLED-TRIALS; CAUSAL INFERENCE; APIXABAN; WARFARIN;
DOI
10.1016/j.patter.2024.100973
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Treatment effect estimation (TEE) aims to identify the causal effects of treatments on important outcomes. Current machine-learning-based methods, mainly trained on labeled data for specific treatments or outcomes, can be sub-optimal with limited labeled data. In this article, we propose a new pre-training and fine-tuning framework, CURE (causal treatment effect estimation), for TEE from observational data. CURE is pre-trained on large-scale unlabeled patient data to learn representative contextual patient representations and fine-tuned on labeled patient data for TEE. We present a new sequence encoding approach for longitudinal patient data embedding both structure and time. Evaluated on four downstream TEE tasks, CURE outperforms the state-of-the-art methods, marking a 7% increase in area under the precision-recall curve and an 8% rise in the influence-function-based precision of estimating heterogeneous effects. Validation with four randomized clinical trials confirms its efficacy in producing trial conclusions, highlighting CURE's capacity to supplement traditional clinical trials.
Pages: 12
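
To make the pre-train/fine-tune workflow described in the abstract concrete, the following is a minimal sketch in PyTorch of one plausible arrangement: a transformer-style encoder over longitudinal medical-code sequences with visit-time embeddings, a masked-code head for self-supervised pre-training on unlabeled patients, and a two-branch head that estimates outcomes under treatment and control after fine-tuning. The class names, the masked-code objective, the mean pooling, and all hyperparameters are illustrative assumptions for this record, not details taken from the CURE paper.

import torch
import torch.nn as nn

class PatientEncoder(nn.Module):
    """Transformer encoder over medical-code IDs plus coarse visit-time embeddings (assumed design)."""
    def __init__(self, vocab_size, d_model=128, n_heads=4, n_layers=2, max_time=512):
        super().__init__()
        self.code_emb = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.time_emb = nn.Embedding(max_time, d_model)  # bucketed visit times
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, codes, times):
        h = self.code_emb(codes) + self.time_emb(times)
        return self.encoder(h)            # (batch, seq_len, d_model)

class MaskedCodeHead(nn.Module):
    """Pre-training head: predict masked codes from context (assumed objective)."""
    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, h):
        return self.proj(h)               # logits over the code vocabulary

class TEEHead(nn.Module):
    """Fine-tuning head: outcome probabilities under treatment and under control."""
    def __init__(self, d_model):
        super().__init__()
        self.treated = nn.Linear(d_model, 1)
        self.control = nn.Linear(d_model, 1)

    def forward(self, h):
        pooled = h.mean(dim=1)            # mean pooling over the sequence
        y1 = torch.sigmoid(self.treated(pooled))
        y0 = torch.sigmoid(self.control(pooled))
        return y1, y0, y1 - y0            # estimated individual treatment effect

if __name__ == "__main__":
    # Toy usage: pre-train PatientEncoder + MaskedCodeHead on unlabeled sequences,
    # then reuse the encoder weights and train TEEHead on the labeled cohort.
    enc = PatientEncoder(vocab_size=1000)
    codes = torch.randint(1, 1000, (2, 16))
    times = torch.randint(0, 512, (2, 16))
    y1, y0, ite = TEEHead(128)(enc(codes, times))
    print(ite.shape)                      # torch.Size([2, 1])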