Learning Causal Effects From Many Randomized Experiments Using Regularized Instrumental Variables

Cited by: 4
Authors
Peysakhovich, Alexander [1]
Eckles, Dean [2]
Affiliations
[1] Facebook Artificial Intelligence Research, New York, NY, USA
[2] MIT, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Keywords
causality; experimentation; instrumental variables; machine learning
DOI
10.1145/3178876.3186151
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Scientific and business practices are increasingly resulting in large collections of randomized experiments. Analyzed together, multiple experiments can tell us things that individual experiments cannot. We study how to learn causal relationships between variables from the kinds of collections faced by modern data scientists: the number of experiments is large, many experiments have very small effects, and the analyst lacks metadata (e.g., descriptions of the interventions). We use experimental groups as instrumental variables (IV) and show that a standard method (two-stage least squares) is biased even when the number of experiments is infinite. We show how a sparsity-inducing l0 regularization can (in a reversal of the standard bias-variance tradeoff) reduce bias (and thus error) of interventional predictions. We are interested in estimating causal effects, rather than just predicting outcomes, so we also propose a modified cross-validation procedure (IVCV) to feasibly select the regularization parameter. We show, using a trick from Monte Carlo sampling, that IVCV can be done using summary statistics instead of raw data. This makes our full procedure simple to use in many real-world applications.
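The abstract outlines the core estimation strategy: treat experiment-group assignments as instruments, fit a first stage that predicts the treatment from the groups, and regress the outcome on the first-stage fitted values, optionally shrinking weak first-stage effects before the second stage. The sketch below is a minimal illustration of that idea, not the authors' implementation: the simulated data, the function name group_iv_estimate, and the threshold parameter are all assumptions, and simple hard-thresholding of group-level first-stage effects stands in for the l0 regularization (and IVCV-selected penalty) described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate J randomized experiments: each experiment shifts a treatment
# variable T by a (mostly small) amount, while an unobserved confounder U
# moves both T and the outcome Y.
J, n_per_group = 200, 500
true_effect = 0.3

group = np.repeat(np.arange(J), n_per_group)
group_shift = rng.normal(0.0, 0.1, size=J)        # many experiments barely move T
U = rng.normal(size=group.size)                   # unobserved confounder
T = group_shift[group] + 0.8 * U + rng.normal(size=group.size)
Y = true_effect * T + 0.8 * U + rng.normal(size=group.size)

def group_iv_estimate(T, Y, group, n_groups, threshold=0.0):
    """Group-dummy 2SLS via group means of T; optionally hard-threshold
    small first-stage (group-mean) deviations, a crude stand-in for the
    sparsity-inducing regularization discussed in the paper."""
    counts = np.bincount(group, minlength=n_groups)
    t_bar = np.bincount(group, weights=T, minlength=n_groups) / counts
    # First stage: fitted T is the group mean; shrink weak groups to the grand mean.
    t_hat = np.where(np.abs(t_bar - t_bar.mean()) > threshold, t_bar, t_bar.mean())
    # Second stage: OLS of Y on the first-stage fitted values.
    x = t_hat[group] - t_hat[group].mean()
    return np.dot(x, Y - Y.mean()) / np.dot(x, x)

print("naive OLS slope     :", np.polyfit(T, Y, 1)[0])  # confounded by U
print("plain 2SLS          :", group_iv_estimate(T, Y, group, J))
print("thresholded 2SLS    :", group_iv_estimate(T, Y, group, J, threshold=0.05))
print("true causal effect  :", true_effect)
```

In this toy setup, with many experiments whose effects on T are mostly small, the plain group-dummy 2SLS estimate tends to drift toward the confounded OLS slope, while dropping weak first-stage groups typically pulls the estimate back toward the true effect, mirroring the bias reduction the abstract describes.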
Pages: 699-707
Page count: 9