Information-Based Optimal Subdata Selection for Big Data Linear Regression

Cited by: 157
Authors
Wang, HaiYing [1 ]
Yang, Min [2 ]
Stufken, John [3 ]
Affiliations
[1] Univ Connecticut, Dept Stat, Mansfield, CT USA
[2] Univ Illinois, Dept Math Stat & Comp Sci, Chicago, IL USA
[3] Arizona State Univ, Sch Math & Stat Sci, Tempe, AZ 85287 USA
Funding
U.S. National Science Foundation;
Keywords
D-optimality; Information matrix; Linear regression; Massive data; Subdata;
DOI
10.1080/01621459.2017.1408468
CLC Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208; 070103; 0714;
Abstract
Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable to extraordinarily large datasets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling error, but it also leads to a covariance matrix of the estimators that is typically bounded from below by a term of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward, and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data. Supplementary materials for this article are available online.
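To make the selection rule concrete, below is a minimal NumPy sketch of the D-optimality-motivated IBOSS algorithm the paper describes: for each of the p covariates in turn, take, from the rows not yet selected, the r = k/(2p) rows with the smallest values and the r rows with the largest values of that covariate, then fit ordinary least squares on the resulting subdata. The function name iboss_dopt and implementation details (intercept handling, partial sort via np.argpartition) are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def iboss_dopt(X, y, k):
    """Sketch of D-optimality IBOSS: per covariate, keep the r = k/(2p)
    smallest and r largest not-yet-selected rows, then fit OLS on them."""
    n, p = X.shape
    r = k // (2 * p)
    selected = np.zeros(n, dtype=bool)
    for j in range(p):
        avail = np.flatnonzero(~selected)      # rows still available
        vals = X[avail, j]
        # Partial sorts (O(n) per covariate) rather than a full sort;
        # assumes vals.size >= 2r so the two extreme sets do not overlap.
        lo = np.argpartition(vals, r - 1)[:r]              # r smallest
        hi = np.argpartition(vals, vals.size - r)[-r:]     # r largest
        selected[avail[lo]] = True
        selected[avail[hi]] = True
    idx = np.flatnonzero(selected)
    # OLS on the subdata only, with an intercept column
    Xs = np.column_stack([np.ones(idx.size), X[idx]])
    beta, *_ = np.linalg.lstsq(Xs, y[idx], rcond=None)
    return idx, beta

# Hypothetical usage: n = 100,000 rows reduced to a subdata of size k = 1,000.
rng = np.random.default_rng(0)
n, p, k = 100_000, 5, 1_000
X = rng.standard_normal((n, p))
y = 1.0 + X @ np.arange(1.0, p + 1) + rng.standard_normal(n)
idx, beta = iboss_dopt(X, y, k)
```

Because each covariate pass needs only a partial sort of the remaining rows, the cost is linear in the full data size n, and the per-covariate passes can be distributed across data partitions, which is consistent with advantages (i) and (ii) claimed in the abstract.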
Pages: 393-405
Page count: 13