Direct Learning With Multi-Task Neural Networks for Treatment Effect Estimation

Cited by: 3
Authors
Zhu, Fujin [1 ]
Lu, Jie [1 ]
Lin, Adi [1 ]
Xuan, Junyu [1 ]
Zhang, Guangquan [1 ]
Affiliation
[1] Univ Technol Sydney, Fac Engn & IT, Ctr Artificial Intelligence CAI, Ultimo, NSW 2007, Australia
Funding
Australian Research Council
Keywords
Representation learning; Heart; Neural networks; Supervised learning; Education; Decision making; Estimation; Causal inference; treatment effect estimation; multi-task learning; neural networks;
DOI
10.1109/TKDE.2021.3112591
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Causal inference from observational data lies at the heart of education, healthcare, optimal resource allocation, and many other decision-making processes. Most existing methods estimate the target treatment effect indirectly by inferring the underlying treatment response functions or the unobserved counterfactual outcome for every individual. These indirect learning methods are subject to model misspecification and high variability. As a complement to existing indirect learning methods, in this paper we propose a direct learning framework, called HTENet, for causal inference using deep multi-task learning. It is based on a novel empirical t-risk for learning the causal effect model of direct interest in a supervised learning scheme. In the proposed framework, the target treatment effect model is parametrized as a neural network and learned jointly with other auxiliary models in an end-to-end manner. Moreover, we extend the naive HTENet into two variants, HTENet-Simple and HTENet-Reg, by further incorporating shared representation learning layers and a propensity prediction regularizer. Experiments on simulated and real data demonstrate that the proposed methods match or outperform existing state-of-the-art methods. Moreover, by learning the target treatment effect function directly, the proposed methods tend to obtain more stable estimates than existing indirect methods.
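The abstract's core idea, learning the treatment effect function directly as a supervised task instead of subtracting two indirectly estimated outcome models, can be illustrated with the classical transformed-outcome regression. This is a deliberate simplification, not the paper's HTENet architecture or its t-risk: it assumes the true propensity is known and uses ordinary least squares in place of a neural network, but the pseudo-outcome Y* it constructs satisfies E[Y* | X] = tau(X), so regressing Y* on X recovers the heterogeneous effect in one direct learning step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data with a known heterogeneous treatment effect.
n, d = 20000, 3
X = rng.normal(size=(n, d))
e = 1.0 / (1.0 + np.exp(-X[:, 0]))   # true propensity e(x) = P(T=1 | x)
T = rng.binomial(1, e)               # observed (confounded) treatment
tau = 1.0 + 2.0 * X[:, 1]            # true CATE: tau(x) = 1 + 2*x_1
y0 = X @ np.array([0.5, -0.3, 0.2])  # baseline outcome surface
Y = y0 + T * tau + rng.normal(scale=0.5, size=n)

# Inverse-propensity transformed outcome: E[Y* | X] = tau(X),
# so the effect can be learned directly as one supervised regression.
Y_star = T * Y / e - (1 - T) * Y / (1 - e)

# Direct effect model: least squares with an intercept stands in for
# the neural-network effect head used in the paper.
Z = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Z, Y_star, rcond=None)
tau_hat = Z @ beta  # estimated CATE per individual
```

In practice the propensity is unknown and the pseudo-outcome is high-variance where e(x) is near 0 or 1, which is precisely the instability the paper's jointly trained auxiliary models and propensity regularizer aim to mitigate.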
Pages: 2457-2470
Page count: 14