Learning Causal Effects via Weighted Empirical Risk Minimization

Cited by: 0
Authors
Jung, Yonghan [1 ]
Tian, Jin [2 ]
Bareinboim, Elias [3 ]
Affiliations
[1] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
[2] Iowa State Univ, Dept Comp Sci, Ames, IA 50011 USA
[3] Columbia Univ, Dept Comp Sci, New York, NY 10027 USA
Keywords
INFERENCE
DOI
Not available
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Learning causal effects from data is a fundamental problem across the sciences. Determining the identifiability of a target effect from a combination of the observational distribution and the causal graph underlying a phenomenon is well-understood in theory. However, in practice, it remains a challenge to apply the identification theory to estimate the identified causal functionals from finite samples. Although a plethora of effective estimators have been developed under the setting known as the back-door (also called conditional ignorability), there exists still no systematic way of estimating arbitrary causal functionals that are both computationally and statistically attractive. This paper aims to bridge this gap, from causal identification to causal estimation. We note that estimating functionals from limited samples based on the empirical risk minimization (ERM) principle has been pervasive in the machine learning literature, and these methods have been extended to causal inference under the back-door setting. In this paper, we develop a learning framework that marries two families of methods, benefiting from the generality of the causal identification theory and the effectiveness of the estimators produced based on the principle of ERM. Specifically, we develop a sound and complete algorithm that generates causal functionals in the form of weighted distributions that are amenable to the ERM optimization. We then provide a practical procedure for learning causal effects from finite samples and a causal graph. Finally, experimental results support the effectiveness of our approach.
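The abstract describes casting identified causal functionals as weighted distributions that can be fit by empirical risk minimization. As a minimal illustration of that general idea (a sketch of the classic back-door special case, not the paper's actual algorithm), inverse-propensity weights reweight the observational risk so that an ordinary weighted regression recovers the interventional effect. The data-generating process and all variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Synthetic back-door scenario: Z confounds both treatment X and outcome Y.
Z = rng.binomial(1, 0.5, n)
X = rng.binomial(1, 0.2 + 0.6 * Z)            # P(X=1 | Z) is 0.2 or 0.8
Y = 2.0 * X + 3.0 * Z + rng.normal(0, 1, n)   # true causal effect of X on Y is 2

def wls(x, y, w):
    """Weighted least squares of y on [1, x]; returns (intercept, slope)."""
    A = np.column_stack([np.ones_like(x, dtype=float), x])
    Aw = A * w[:, None]
    return np.linalg.solve(A.T @ Aw, Aw.T @ y)

# Naive (unweighted) ERM ignores the confounder and is biased.
_, ate_naive = wls(X, Y, np.ones(n))

# Weighted ERM: weights 1 / P(X = x_i | Z = z_i), with propensities
# estimated empirically within each stratum of Z.
p1 = np.array([X[Z == z].mean() for z in (0, 1)])[Z]   # P(X=1 | Z=z_i)
w = 1.0 / np.where(X == 1, p1, 1.0 - p1)
_, ate_ipw = wls(X, Y, w)

print(f"naive: {ate_naive:.2f}  weighted ERM: {ate_ipw:.2f}")
```

The weighted fit concentrates risk on the interventional distribution, so the slope approaches the true effect (2), while the unweighted fit absorbs the confounding path through Z.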
Pages: 13
Related papers
50 records in total
  • [21] Efficient Private Empirical Risk Minimization for High-dimensional Learning
    Kasiviswanathan, Shiva Prasad
    Jin, Hongxia
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [22] Empirical risk minimization in the interpolating regime with application to neural network learning
    Muecke, Nicole
    Steinwart, Ingo
    MACHINE LEARNING, 2025, 114 (04)
  • [23] Explainable empirical risk minimization
    Zhang, Linli
    Karakasidis, Georgios
    Odnoblyudova, Arina
    Dogruel, Leyla
    Tian, Yu
    Jung, Alex
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (08): 3983 - 3996
  • [24] TRANSFER BAYESIAN META-LEARNING VIA WEIGHTED FREE ENERGY MINIMIZATION
    Zhang, Yunchuan
    Jose, Sharu Theresa
    Simeone, Osvaldo
    2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021
  • [25] Risk-Sensitive Learning via Expected Shortfall Minimization
    Kashima, Hisashi
    PROCEEDINGS OF THE SIXTH SIAM INTERNATIONAL CONFERENCE ON DATA MINING, 2006, : 529 - 533
  • [26] Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method
    Eisen, Mark
    Mokhtari, Aryan
    Ribeiro, Alejandro
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [27] Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods
    Mokhtari, Aryan
    Ozdaglar, Asuman
    Jadbabaie, Ali
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [28] Guaranteed distributed machine learning: Privacy-preserving empirical risk minimization
    Owusu-Agyemang, Kwabena
    Qin, Zhen
    Benjamin, Appiah
    Xiong, Hu
    Qin, Zhiguang
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2021, 18 (04) : 4772 - 4796
  • [29] Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization
    Jayaraman, Bargav
    Wang, Lingxiao
    Evans, David
    Gu, Quanquan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [30] Minimization of Empirical Risk as a Means of Choosing the Number of Hypotheses in Algebraic Machine Learning
    Vinogradov, D. V.
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2023, 33 (03) : 525 - 528