Independently Interpretable Lasso for Generalized Linear Models

Cited: 1
Authors
Takada, Masaaki [1 ]
Suzuki, Taiji [2 ,3 ,4 ]
Fujisawa, Hironori [1 ,4 ,5 ]
Affiliations
[1] SOKENDAI, Grad Univ Adv Studies, Tokyo 1908562, Japan
[2] Univ Tokyo, Tokyo 1050033, Japan
[3] Japan Sci & Technol Agcy, PRESTO, Kawaguchi, Saitama 3320012, Japan
[4] RIKEN, Ctr Adv Integrated Intelligence Res, Tokyo 1030027, Japan
[5] Inst Stat Math, Tokyo 1908562, Japan
Keywords
VARIABLE SELECTION; BREAST-CANCER; REGRESSION; REGULARIZATION; PREDICTION; SPARSITY; RECOVERY; TUMOR
DOI
10.1162/neco_a_01279
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Sparse regularization, such as ℓ1 regularization, is a powerful and widely used strategy for high-dimensional learning problems, and its effectiveness has been supported both practically and theoretically by several studies. However, one of the biggest issues with sparse regularization is that its performance is quite sensitive to correlations between features. Under weak regularization, the ordinary ℓ1 penalty selects variables that are correlated with one another, which deteriorates not only the estimation error but also interpretability. In this letter, we propose a new regularization method, the independently interpretable lasso (IILasso), for generalized linear models. The proposed regularizer suppresses the selection of correlated variables, so that each active variable affects the response independently in the model. Hence, the regression coefficients can be interpreted intuitively, and performance is also improved by avoiding overfitting. We analyze the theoretical properties of the IILasso, showing that it is advantageous for sign recovery and achieves an almost minimax optimal convergence rate. Synthetic and real data analyses also demonstrate the effectiveness of the IILasso.
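To make the abstract's idea concrete, below is a minimal sketch of fitting an IILasso-style estimator by coordinate descent on a linear model. The specific penalty form λ(‖β‖₁ + (α/2)|β|ᵀR|β|), with R built from absolute sample correlations and a zero diagonal, and the names iilasso_linear, lam, and alpha are assumptions chosen for illustration, not necessarily the authors' exact formulation.

```python
# Sketch of coordinate descent for an IILasso-style penalty on a linear model.
# Assumed objective (illustrative, not the paper's exact form):
#   (1/2n) ||y - X b||^2 + lam * ( ||b||_1 + (alpha/2) |b|^T R |b| ),
# where R holds absolute sample correlations with a zero diagonal, so that
# correlated active variables penalize each other but no variable penalizes itself.
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def iilasso_linear(X, y, lam=0.1, alpha=1.0, n_iter=200):
    n, p = X.shape
    R = np.abs(np.corrcoef(X, rowvar=False)) - np.eye(p)  # |r_jk|, zero diagonal
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed from the current fit.
            resid_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ resid_j / n
            # Correlation with already-active variables inflates the threshold,
            # discouraging the selection of mutually correlated features.
            thresh = lam * (1.0 + alpha * (R[j] @ np.abs(b)))
            b[j] = soft_threshold(rho, thresh) / (X[:, j] @ X[:, j] / n)
    return b
```

With alpha = 0 the update reduces to the ordinary lasso soft-thresholding step; for alpha > 0, a candidate variable that is strongly correlated with variables already in the model faces a higher effective threshold, so the solver prefers supports whose members are nearly uncorrelated, which is the mechanism behind the independent interpretability claimed in the abstract.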
Pages: 1168-1221 (54 pages)