Targeted L1-Regularization and Joint Modeling of Neural Networks for Causal Inference

Cited by: 1
Authors
Rostami, Mehdi [1 ]
Saarela, Olli [1 ]
Affiliations
[1] Univ Toronto, Dalla Lana Sch Publ Hlth, Toronto, ON M5T 3M7, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
causal inference; instrumental variables; neural networks; doubly robust estimation; regularization;
DOI
10.3390/e24091290
CLC number
O4 [Physics];
Subject classification code
0702;
Abstract
The Augmented Inverse Probability Weighting (AIPW) estimator of the Average Treatment Effect (ATE) is calculated in two steps: first, the treatment and outcome are modeled, and second, the resulting predictions are plugged into the AIPW estimator. Model misspecification in the first step has led researchers to use Machine Learning (ML) algorithms instead of parametric models. However, in the presence of strong confounders and/or Instrumental Variables (IVs), complex ML algorithms can produce near-perfect predictions for the treatment model, which violates the positivity assumption and inflates the variance of the AIPW estimator. The complexity of the ML algorithms must therefore be controlled so that the treatment model avoids perfect predictions while still learning the relationships between the confounders and the treatment and outcome. We use two NN architectures with L1-regularization on specific NN parameters and investigate how certain of their hyperparameters should be tuned in the presence of confounders and IVs to achieve a favorable bias-variance tradeoff for ATE estimators such as the AIPW estimator. Based on simulation results, we provide recommendations on how NNs can be employed for ATE estimation.
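The two-step procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `aipw_ate` is hypothetical, and the nuisance predictions `m1`, `m0`, and `e` are assumed to come from the first-step outcome and treatment models (NNs in the paper's setting).

```python
import numpy as np

def aipw_ate(y, a, m1, m0, e):
    """Second-step AIPW (doubly robust) estimate of the ATE.

    y  : observed outcomes
    a  : binary treatment indicators (0/1)
    m1 : first-step predictions of E[Y | A=1, X]
    m0 : first-step predictions of E[Y | A=0, X]
    e  : first-step propensity scores P(A=1 | X)
    """
    y, a, m1, m0, e = map(np.asarray, (y, a, m1, m0, e))
    # Outcome-model difference plus inverse-probability-weighted residual
    # corrections; propensities near 0 or 1 (positivity violations, as when
    # an overfit treatment model predicts perfectly) blow up these weights.
    psi = (m1 - m0
           + a * (y - m1) / e
           - (1 - a) * (y - m0) / (1 - e))
    return psi.mean()
```

When the outcome-model residuals are zero, the estimate reduces to the mean of `m1 - m0`, which makes the variance-inflating role of extreme propensity scores easy to isolate in simulations.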
Pages: 17