Independently Interpretable Lasso for Generalized Linear Models

Cited by: 1
Authors
Takada, Masaaki [1]
Suzuki, Taiji [2,3,4]
Fujisawa, Hironori [1,4,5]
Affiliations
[1] SOKENDAI, Grad Univ Adv Studies, Tokyo 1908562, Japan
[2] Univ Tokyo, Tokyo 1050033, Japan
[3] Japan Sci & Technol Agcy, PRESTO, Kawaguchi, Saitama 3320012, Japan
[4] RIKEN, Ctr Adv Integrated Intelligence Res, Tokyo 1030027, Japan
[5] Inst Stat Math, Tokyo 1908562, Japan
Keywords
VARIABLE SELECTION; BREAST-CANCER; REGRESSION; REGULARIZATION; PREDICTION; SPARSITY; RECOVERY; TUMOR
DOI
10.1162/neco_a_01279
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Sparse regularization, such as ℓ1 regularization, is a powerful and widely used strategy for high-dimensional learning problems, and its effectiveness has been supported both practically and theoretically by several studies. However, one of its biggest issues is that performance is highly sensitive to correlations between features. Ordinary ℓ1 regularization selects variables that are correlated with each other when the regularization is weak, which degrades not only estimation error but also interpretability. In this letter, we propose a new regularization method, the independently interpretable lasso (IILasso), for generalized linear models. Our proposed regularizer suppresses the selection of correlated variables, so that each active variable affects the response independently in the model. Hence, regression coefficients can be interpreted intuitively, and performance is also improved by avoiding overfitting. We analyze the theoretical properties of the IILasso and show that the proposed method is advantageous for sign recovery and achieves an almost minimax optimal convergence rate. Synthetic and real data analyses also demonstrate the effectiveness of the IILasso.
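To illustrate the idea the abstract describes, the following is a minimal sketch of a correlation-suppressing lasso for the linear-model case. It is not the paper's exact formulation: the penalty form λ(‖β‖₁ + α·|β|ᵀR|β|/2), the choice of R as the absolute feature-correlation matrix with zero diagonal, the function name `iilasso_cd`, and the coordinate-descent update are all assumptions made for illustration. The key effect is visible in the update: a feature's soft-threshold level grows with the absolute coefficients of the features it correlates with, which discourages co-selecting correlated variables.

```python
import numpy as np

def iilasso_cd(X, y, lam=0.1, alpha=1.0, n_iter=200):
    """Coordinate descent for an IILasso-style penalty (illustrative sketch).

    Assumed objective: (1/2n)||y - X b||^2 + lam * ( ||b||_1 + alpha/2 * |b|^T R |b| ),
    where R holds absolute feature correlations with a zero diagonal, so
    correlated features inflate each other's effective threshold.
    """
    n, p = X.shape
    R = np.abs(X.T @ X) / n          # absolute (uncentered) correlations
    np.fill_diagonal(R, 0.0)         # no self-penalty
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                     # running residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]      # remove feature j's contribution
            rho = X[:, j] @ r / n
            # Threshold grows with |b| of features correlated with j.
            thr = lam * (1.0 + alpha * (R[j] @ np.abs(b)))
            b[j] = np.sign(rho) * max(abs(rho) - thr, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]      # restore residual with new b[j]
    return b
```

With α = 0 this reduces to ordinary lasso coordinate descent; increasing α makes it progressively harder for two strongly correlated features to be active at the same time, which is the "independent interpretability" behavior the abstract refers to.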
Pages: 1168-1221 (54 pages)