POST-SELECTION INFERENCE IN THREE-DIMENSIONAL PANEL DATA

Cited by: 0
Authors
Chiang, Harold D. [1 ]
Rodrigue, Joel [2 ]
Sasaki, Yuya [2 ]
Affiliations
[1] Univ Wisconsin, Madison, WI USA
[2] Vanderbilt Univ, Nashville, TN 37235 USA
Keywords
DATA MODELS; CONFIDENCE-INTERVALS; SHRINKAGE ESTIMATION; QUANTILE REGRESSION; VARIABLE SELECTION; UNIFORM INFERENCE; PARAMETERS; REGIONS; NUMBER
DOI
10.1017/S0266466622000081
Chinese Library Classification: F [Economics]
Discipline code: 02
Abstract
Three-dimensional panel models are widely used in empirical analysis. Researchers employ various combinations of fixed effects for three-dimensional panels, even though the correct specification is unknown. When a parsimonious model is imposed but the true model is rich in complexity, the fitted model inevitably incurs the consequences of misspecification, including potential bias. When a richly specified model is employed but the true model is parsimonious, the consequences typically include a poor fit with larger standard errors than necessary. It is therefore useful for researchers to have good model selection techniques that assist in determining the "true" model or a satisfactory approximation to it. In this light, Lu, Miao, and Su (2021, Econometric Reviews 40, 867-898) propose methods of model selection. We advance this literature by proposing a method of post-selection inference for regression parameters. Although we use the lasso technique as the means of model selection, our assumptions allow many, and even all, fixed effects to be nonzero. This property is important for avoiding a degenerate distribution of fixed effects, which often reflect the economic sizes of countries in gravity analyses of trade. Using an international trade database, we document evidence that our key assumption of approximately sparse fixed effects is plausibly satisfied in gravity analyses of trade. We also establish uniform size control over alternative data-generating processes for the fixed effects. Simulation studies demonstrate that the proposed method is less biased than under-fitting fixed-effect estimators, is more efficient than over-fitting fixed-effect estimators, and robustly delivers inference that is as accurate as that of the oracle estimator.
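The workflow the abstract describes, treating the candidate fixed effects of a three-dimensional panel as dummy regressors, selecting among them with the lasso, and then estimating the regression coefficient on the selected specification, can be illustrated with a minimal two-step sketch. This is a toy illustration with made-up dimensions and an arbitrarily chosen penalty level, not the paper's actual estimator; in particular, the naive post-lasso refit below lacks the uniform-validity guarantees the paper establishes.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
I, J, T = 10, 10, 5          # e.g., exporters, importers, time periods
n = I * J * T

# Panel indices (i, j, t) for each stacked observation
i_idx = np.repeat(np.arange(I), J * T)
j_idx = np.tile(np.repeat(np.arange(J), T), I)
t_idx = np.tile(np.arange(T), I * J)

# Toy DGP: one regressor, a few nonzero exporter effects, no j/t effects
beta = 1.0
alpha_i = np.zeros(I)
alpha_i[:3] = [2.0, -1.5, 1.0]          # approximately sparse fixed effects
x = rng.normal(size=n)
y = beta * x + alpha_i[i_idx] + 0.1 * rng.normal(size=n)

# Dummy matrix stacking all candidate fixed-effect specifications
D = np.hstack([
    (i_idx[:, None] == np.arange(I)).astype(float),
    (j_idx[:, None] == np.arange(J)).astype(float),
    (t_idx[:, None] == np.arange(T)).astype(float),
])

# Step 1: lasso decides which fixed-effect dummies enter the model
sel = Lasso(alpha=0.05, fit_intercept=True).fit(np.column_stack([x, D]), y)
keep = np.flatnonzero(np.abs(sel.coef_[1:]) > 1e-8)   # selected dummies

# Step 2: refit OLS on the regressor plus the selected dummies only
ols = LinearRegression().fit(np.column_stack([x, D[:, keep]]), y)
beta_hat = ols.coef_[0]
print(f"selected {len(keep)} dummies; beta_hat = {beta_hat:.3f}")
```

Even in this naive form, the two-step structure shows why the method can beat both extremes: it avoids the omitted-variable bias of an under-fitted specification while dropping the irrelevant dummies that inflate standard errors in an over-fitted one.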
Pages: 623-658
Page count: 36