On learning disentangled representations for individual treatment effect estimation

Cited by: 1
Authors
Chu, Jiebin [1 ]
Sun, Zhoujian [1 ]
Dong, Wei [2 ]
Shi, Jinlong [3 ]
Huang, Zhengxing [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Chinese Peoples Liberat Army Gen Hosp, Dept Cardiol, Beijing, Peoples R China
[3] Chinese Peoples Liberat Army Gen Hosp, Med Big Data Ctr, Dept Med Innovat Res, Beijing, Peoples R China
Keywords
Individualized treatment effect; Causal inference; Deep learning; Disentangled representation; Auxiliary-task learning; Observational data; PROPENSITY SCORE; MODEL; BIAS;
DOI
10.1016/j.jbi.2021.103940
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Discipline Classification Codes
081203; 0835
Abstract
Objective: Estimating the individualized treatment effect (ITE) from observational data is challenging because of selection bias, which results from the distributional discrepancy between treatment groups caused by the dependence between features and assigned treatments. This dependence is induced by the factors related to treatment assignment. We hypothesize that the features consist of three types of latent factors: outcome-specific factors, treatment-specific factors and confounders. We therefore aim to reduce the influence of the treatment-related factors, i.e., the treatment-specific factors and confounders, on outcome prediction in order to mitigate the effects of selection bias.
Method: We present a novel representation learning model in which the main task of outcome prediction and the auxiliary task of classifying the treatment assignment are used to learn outcome-oriented and treatment-oriented latent representations, respectively. However, because the confounders are related to both the treatment assignment and the outcome, they remain present in both representations. To further reduce their influence, an individualized orthogonal regularization is incorporated into the proposed model. This regularization forces an individual's outcome-oriented and treatment-oriented latent representations to be orthogonal in the inner product space, reducing the confounder information they share, so that the ITE can be estimated more precisely without the effects of selection bias.
Result: We evaluate the proposed model on a semi-simulated dataset and a real-world dataset. The experimental results demonstrate that it achieves performance that is competitive with or better than that of state-of-the-art models.
Conclusion: The proposed method performs well on ITE estimation and reduces selection bias by incorporating an auxiliary task and adopting orthogonal regularization to disentangle the latent factors.
Significance: This paper offers a novel method for reducing selection bias when estimating the ITE from observational data via disentangled representation learning.
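The abstract describes two training signals, an auxiliary treatment-classification task and an individualized orthogonal regularization on the two learned representations. The following is a minimal PyTorch-style sketch based only on the abstract, not the authors' released code: the encoder and head architectures (including the TARNet-style two-head outcome predictor), the loss weights alpha and beta, and the cosine-based form of the per-individual orthogonality penalty are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above, not the authors' implementation):
# two encoders produce outcome-oriented and treatment-oriented representations;
# an auxiliary treatment classifier and a per-individual orthogonality penalty
# are added to the outcome-prediction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledITE(nn.Module):
    def __init__(self, x_dim: int, rep_dim: int = 64):
        super().__init__()
        # Hypothetical architecture: separate encoders for the two representations.
        self.outcome_encoder = nn.Sequential(nn.Linear(x_dim, rep_dim), nn.ReLU())
        self.treatment_encoder = nn.Sequential(nn.Linear(x_dim, rep_dim), nn.ReLU())
        # Assumed TARNet-style heads: one outcome head per treatment arm.
        self.outcome_head_t0 = nn.Linear(rep_dim, 1)
        self.outcome_head_t1 = nn.Linear(rep_dim, 1)
        # Auxiliary head: classify the assigned treatment from the treatment-oriented representation.
        self.treatment_head = nn.Linear(rep_dim, 1)

    def forward(self, x: torch.Tensor, t: torch.Tensor):
        z_y = self.outcome_encoder(x)    # outcome-oriented representation
        z_t = self.treatment_encoder(x)  # treatment-oriented representation
        y0 = self.outcome_head_t0(z_y).squeeze(-1)
        y1 = self.outcome_head_t1(z_y).squeeze(-1)
        y_hat = torch.where(t.bool(), y1, y0)           # factual outcome prediction
        t_logit = self.treatment_head(z_t).squeeze(-1)  # auxiliary treatment logit
        return y_hat, t_logit, z_y, z_t


def training_loss(model, x, t, y, alpha=1.0, beta=0.1):
    """Outcome loss + auxiliary treatment loss + individualized orthogonal penalty.

    Shapes (hypothetical): x: (batch, x_dim); t: (batch,) in {0, 1}; y: (batch,).
    """
    y_hat, t_logit, z_y, z_t = model(x, t)
    loss_y = F.mse_loss(y_hat, y)
    loss_t = F.binary_cross_entropy_with_logits(t_logit, t.float())
    # Individualized orthogonality: drive the inner product of each individual's
    # normalized representations toward zero so they share little information.
    cos = F.cosine_similarity(z_y, z_t, dim=-1)
    loss_orth = (cos ** 2).mean()
    return loss_y + alpha * loss_t + beta * loss_orth
```

Under these assumptions, the ITE estimate for an individual would be the difference between the two outcome heads, y1 - y0, and the squared-cosine term is one plausible reading of forcing the two representations to be "orthogonal in the inner product space".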
Pages: 10
Related Papers (50 in total)
  • [1] Learning Decomposed Representations for Treatment Effect Estimation
    Wu, Anpeng
    Yuan, Junkun
    Kuang, Kun
    Li, Bo
    Wu, Runze
    Zhu, Qiang
    Zhuang, Yueting
    Wu, Fei
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (05) : 4989 - 5001
  • [2] Learning Disentangled Representations for Recommendation
    Ma, Jianxin
    Zhou, Chang
    Cui, Peng
    Yang, Hongxia
    Zhu, Wenwu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] Learning Disentangled Discrete Representations
    Friede, David
    Reimers, Christian
    Stuckenschmidt, Heiner
    Niepert, Mathias
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT IV, 2023, 14172 : 593 - 609
  • [4] Transfer Learning for Individual Treatment Effect Estimation
    Aloui, Ahmed
    Dong, Juncheng
    Le, Cat P.
    Tarokh, Vahid
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 56 - 66
  • [5] Treatment Effect Estimation with Disentangled Latent Factors
    Zhang, Weijia
    Liu, Lin
    Li, Jiuyong
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10923 - 10930
  • [6] Disentangled representation for sequential treatment effect estimation
    Chu, Jiebin
    Zhang, Yaoyun
    Huang, Fei
    Si, Luo
    Huang, Songfang
    Huang, Zhengxing
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2022, 226
  • [7] Learning Disentangled Representations of Negation and Uncertainty
    Vasilakes, Jake
    Zerva, Chrysoula
    Miwa, Makoto
    Ananiadou, Sophia
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 8380 - 8397
  • [8] Domain Agnostic Learning with Disentangled Representations
    Peng, Xingchao
    Huang, Zijun
    Sun, Ximeng
    Saenko, Kate
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [9] A Contrastive Objective for Learning Disentangled Representations
    Kahana, Jonathan
    Hoshen, Yedid
    COMPUTER VISION, ECCV 2022, PT XXVI, 2022, 13686 : 579 - 595
  • [10] Learning disentangled representations in the imaging domain
    Liu, Xiao
    Sanchez, Pedro
    Thermos, Spyridon
    O'Neil, Alison Q.
    Tsaftaris, Sotirios A.
    MEDICAL IMAGE ANALYSIS, 2022, 80