Extended ADMM for general penalized quantile regression with linear constraints in big data

Cited by: 0
Authors
Liu, Yongxin [1 ]
Zeng, Peng [2 ]
Affiliations
[1] Nanjing Audit Univ, Sch Stat & Data Sci, Nanjing, People's Republic of China
[2] Auburn Univ, Dept Math & Stat, Auburn, AL, USA
Funding
National Natural Science Foundation of China
Keywords
ADMM; big data; general penalty; linear constraints; quantile regression; variable selection; lasso
DOI
10.1080/03610918.2023.2249271
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
Quantile regression offers a powerful means of understanding the comprehensive relationship between response variables and predictors. By formulating prior domain knowledge and assumptions as constraints on the parameters, estimation efficiency can be enhanced. This paper studies methods based on multi-block ADMM (alternating direction method of multipliers) for fitting general penalized quantile regression models with linear constraints on the regression coefficients. Different formulations for handling linear constraints and general penalties are explored and compared. Among them, the most efficient formulation is identified: it provides an explicit expression for each parameter during the iterations and eliminates the nested loops found in existing algorithms. Furthermore, this work addresses the challenges posed by big data by developing a parallel ADMM algorithm suitable for distributed data storage. The algorithm's convergence is established, along with a robust stopping criterion. Extensive numerical experiments and a real-data example demonstrate the performance of the proposed algorithms and their effectiveness on complex datasets. Details of the theoretical proofs and algorithm variations are provided in the Appendix.
Pages: 22
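The abstract's key computational point is that each ADMM sub-step admits an explicit, loop-free update. As a hedged illustration only, not the paper's exact multi-block algorithm (it omits the linear constraints and general penalties treated there), a minimal ADMM sketch for lasso-penalized quantile regression, with splitting variables r = y - Xβ and z = β, looks like:

```python
import numpy as np

def admm_quantile_lasso(X, y, tau=0.5, lam=1.0, rho=1.0, n_iter=2000):
    """Illustrative ADMM sketch (not the paper's algorithm) for
       minimize  sum_i rho_tau(y_i - x_i' beta) + lam * ||beta||_1,
       using the splitting r = y - X beta, z = beta."""
    n, p = X.shape
    beta = np.zeros(p)
    z = np.zeros(p)
    r = y.copy()
    u1 = np.zeros(n)  # scaled dual for the constraint y - X beta = r
    u2 = np.zeros(p)  # scaled dual for the constraint beta = z
    A = X.T @ X + np.eye(p)  # fixed matrix; its factorization could be cached
    for _ in range(n_iter):
        # beta-update: a single ridge-type linear solve (explicit, no inner loop)
        beta = np.linalg.solve(A, X.T @ (y - r + u1) + (z - u2))
        # r-update: proximal operator of the check loss rho_tau (closed form)
        v = y - X @ beta + u1
        r = v - np.clip(v, (tau - 1.0) / rho, tau / rho)
        # z-update: soft-thresholding, the proximal operator of the l1 penalty
        w = beta + u2
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        # scaled dual ascent on both constraints
        u1 += y - X @ beta - r
        u2 += beta - z
    return z
```

Every sub-step above is an explicit formula or one linear solve with a fixed, cacheable matrix, which is the kind of nested-loop-free iteration the abstract highlights; the paper's contribution extends this pattern to linear constraints, general penalties, and a parallel variant for distributed data.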