p-Values for High-Dimensional Regression

Cited by: 270
Authors
Meinshausen, Nicolai [1]
Meier, Lukas [2 ]
Buehlmann, Peter [2 ]
Affiliations
[1] Univ Oxford, Dept Stat, Oxford OX1 3TG, England
[2] ETH, CH-8092 Zurich, Switzerland
Keywords
Data splitting; False discovery rate; Family-wise error rate; High-dimensional variable selection; Multiple comparisons; FALSE DISCOVERY RATE; VARIABLE SELECTION; ADAPTIVE LASSO; LINEAR-MODELS; RECOVERY;
DOI
10.1198/jasa.2009.tm08647
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline codes
020208; 070103; 0714;
Abstract
Assigning significance in high-dimensional regression is challenging: most computationally efficient selection algorithms cannot guard against the inclusion of noise variables, and asymptotically valid p-values are generally not available. An exception is a recent proposal by Wasserman and Roeder that splits the data into two parts: the number of variables is first reduced to a manageable size using one half of the data, and classical variable selection techniques are then applied to the remaining variables using the other half. This yields asymptotic error control under minimal conditions, but it relies on a single random split of the data. Results are sensitive to this arbitrary choice, which amounts to a "p-value lottery" and makes results difficult to reproduce. Here we show that inference across multiple random splits can be aggregated while maintaining asymptotic control over the inclusion of noise variables. The resulting p-values can be used to control both the family-wise error rate and the false discovery rate. In addition, the proposed aggregation improves power while substantially reducing the number of falsely selected variables.
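The multi-split procedure summarized in the abstract can be sketched on simulated data. This is a minimal illustration, not the authors' implementation: the marginal-correlation screening rule, the number of splits B = 20, the screening size k = 10, and the quantile level γ = 0.5 (aggregation as twice the per-variable median) are illustrative choices, and a normal approximation replaces the exact t reference distribution.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

# Simulated data: only the first two of p = 50 variables are active.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:2] = 2.0
y = X @ beta + rng.standard_normal(n)

def split_pvalues(X, y, rng, k=10):
    """One random split: screen on half I, test classically on half II."""
    n, p = X.shape
    idx = rng.permutation(n)
    half1, half2 = idx[: n // 2], idx[n // 2:]
    # Screening on half I: keep the k variables most correlated with y.
    score = np.abs(X[half1].T @ (y[half1] - y[half1].mean()))
    keep = np.argsort(score)[-k:]
    # OLS on half II for the screened variables (plus intercept).
    X2 = np.column_stack([np.ones(len(half2)), X[np.ix_(half2, keep)]])
    b, *_ = np.linalg.lstsq(X2, y[half2], rcond=None)
    resid = y[half2] - X2 @ b
    df = len(half2) - X2.shape[1]
    sigma2 = resid @ resid / df
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X2.T @ X2)))
    t = b / se
    # Two-sided p-values via a normal approximation to the t distribution.
    praw = np.array([erfc(abs(ti) / sqrt(2.0)) for ti in t])
    # Unscreened variables get p = 1; screened ones are Bonferroni-adjusted by k.
    pv = np.ones(p)
    pv[keep] = np.minimum(praw[1:] * k, 1.0)
    return pv

# Aggregate over B random splits: with gamma = 0.5, the aggregated p-value
# is twice the per-variable median of the split p-values, capped at 1.
B = 20
P = np.array([split_pvalues(X, y, rng) for _ in range(B)])
pvals_agg = np.minimum(2 * np.median(P, axis=0), 1.0)
```

In this toy setting the two active variables receive very small aggregated p-values, while the noise variables are screened out in most splits (receiving p = 1 there) so their medians stay near 1 — illustrating how aggregation removes the dependence on any single arbitrary split.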
Pages: 1671-1681 (11 pages)