Targeted L1-Regularization and Joint Modeling of Neural Networks for Causal Inference

Cited by: 1
Authors
Rostami, Mehdi [1]
Saarela, Olli [1]
Affiliations
[1] Univ Toronto, Dalla Lana Sch Publ Hlth, Toronto, ON M5T 3M7, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
causal inference; instrumental variables; neural networks; doubly robust estimation; regularization;
DOI
10.3390/e24091290
CLC Classification
O4 [Physics];
Discipline Code
0702;
Abstract
The Augmented Inverse Probability Weighting (AIPW) estimator of the Average Treatment Effect (ATE) is computed in two steps: in the first step, the treatment and outcome are modeled, and in the second step, the resulting predictions are plugged into the AIPW estimator. The risk of model misspecification in the first step has led researchers to use Machine Learning (ML) algorithms instead of parametric models. However, in the presence of strong confounders and/or Instrumental Variables (IVs), complex ML algorithms can produce near-perfect predictions for the treatment model, which can violate the positivity assumption and inflate the variance of the AIPW estimator. The complexity of ML algorithms must therefore be controlled so that perfect treatment predictions are avoided while the relationships between the confounders and the treatment and outcome are still learned. We use two NN architectures with an L1-regularization on specific NN parameters and investigate how certain of their hyperparameters should be tuned in the presence of confounders and IVs to achieve a favorable bias-variance tradeoff for ATE estimators such as the AIPW estimator. Based on simulation results, we provide recommendations on how NNs can be employed for ATE estimation.
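The two-step procedure described in the abstract can be sketched in a minimal simulation. This is an illustrative example, not the paper's method: the nuisance models here are plain logistic/linear regressions (the paper fits neural networks with targeted L1-regularization instead), the data-generating process and the clipping threshold are assumptions chosen for the sketch, and the propensity clipping stands in for the positivity safeguards the paper motivates.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                       # observed confounders
e_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, e_true)                       # binary treatment
tau = 2.0                                         # true ATE (assumed for this sketch)
Y = tau * A + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

# Step 1: model treatment and outcome (simple regressions here;
# the paper uses NNs with L1-regularization on specific parameters).
e_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
e_hat = np.clip(e_hat, 0.01, 0.99)                # guard against positivity violations
m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# Step 2: plug the predictions into the AIPW estimator of the ATE.
psi = m1 - m0 + A * (Y - m1) / e_hat - (1 - A) * (Y - m0) / (1 - e_hat)
ate_aipw = psi.mean()
```

The influence-function form of `psi` makes the variance problem the abstract raises visible: when `e_hat` is pushed toward 0 or 1 (as an overfit treatment model driven by strong confounders or IVs would do), the inverse-weight terms explode, which is why the complexity of the first-step learners must be controlled.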
Pages: 17