A Safe Feature Elimination Rule for L1-Regularized Logistic Regression

Cited by: 12
Authors
Pan, Xianli [1 ]
Xu, Yitian [1 ]
Affiliations
[1] China Agricultural University, College of Science, Beijing 34752, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Logistic regression; sparsity; safe screening rule; high-dimensional data; LASSO screening rules; feature selection
DOI
10.1109/TPAMI.2021.3071138
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
L1-regularized logistic regression (L1-LR) is popular for classification problems. To accelerate its training on high-dimensional data, techniques known as safe screening rules have recently been proposed. They safely delete inactive features from the data, greatly reducing the training cost of L1-LR. The screening power of these rules is determined by their safe regions, which are the core technique of any safe screening rule. In this paper, we introduce a new safe feature elimination rule (SFER) for L1-LR. Compared with existing safe rules, the safe region of SFER is improved in two respects: (1) a smaller sphere region is constructed by exploiting the strong convexity of the dual L1-LR problem twice; (2) multiple half-spaces, corresponding to the potentially active constraints, are added to contract the region further. Both improvements enhance the screening ability of SFER. Regarding the complexity of SFER, an iterative filtering framework is given by decomposing the safe region into multiple "domes". In this way, SFER admits a closed-form solution and identified features are not scanned repeatedly. Experiments on ten benchmark data sets demonstrate that SFER outperforms existing methods in training efficiency.
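The abstract's sphere-plus-half-space construction is specific to SFER, but the basic mechanism it tightens is the standard sphere test shared by sphere-based safe screening rules. As a rough illustration only (not SFER's exact region, whose center and radius come from the paper's dual construction), the hypothetical Python sketch below applies that generic test: if a ball B(c, r) is known to contain the dual optimum, feature j can be safely discarded whenever |x_j^T c| + r * ||x_j||_2 < lambda. All names (sphere_screen, X, center, radius, lam) are illustrative.

```python
import numpy as np

def sphere_screen(X, center, radius, lam):
    """Generic sphere test for safe feature elimination (illustrative sketch).

    X      : (n_samples, n_features) design matrix
    center : (n_samples,) center c of a ball known to contain the dual optimum
    radius : radius r of that ball
    lam    : regularization parameter lambda

    Feature j is provably inactive (its weight is zero at the optimum) when
        max over theta in B(c, r) of |x_j^T theta|
        = |x_j^T center| + radius * ||x_j||_2  <  lam.
    """
    # Upper bound on |x_j^T theta| over the ball, per feature
    upper = np.abs(X.T @ center) + radius * np.linalg.norm(X, axis=0)
    return upper < lam  # boolean mask: True = safe to eliminate
```

In a typical pipeline such a test is run once per value of lambda along the regularization path, pruning columns of X before the reduced problem is handed to the L1-LR solver; SFER's contribution, per the abstract, is a tighter region (smaller sphere intersected with half-spaces) that makes this mask eliminate more features.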
Pages: 4544-4554
Page count: 11
Related papers (showing 10 of 50)
  • [1] Wang, Hongmei; Jiang, Kun; Xu, Yitian. Sequential safe feature elimination rule for L1-regularized regression with Kullback-Leibler divergence. Neural Networks, 2022, 155: 523-535.
  • [2] Yuan, Guo-Xun; Ho, Chia-Hua; Lin, Chih-Jen. An improved GLMNET for L1-regularized logistic regression. Journal of Machine Learning Research, 2012, 13: 1999-2030.
  • [3] Trofimov, Ilya; Genkin, Alexander. Distributed coordinate descent for L1-regularized logistic regression. Analysis of Images, Social Networks and Texts (AIST 2015), 2015, 542: 243-254.
  • [4] Sha, Fei; Park, Y. Albert; Saul, Lawrence K. Multiplicative updates for L1-regularized linear and logistic regression. Advances in Intelligent Data Analysis VII, 2007, 4723: 13+.
  • [5] Li, Wei; Lederer, Johannes. Tuning parameter calibration for L1-regularized logistic regression. Journal of Statistical Planning and Inference, 2019, 202: 80-98.
  • [6] Malago, Luigi; Matteucci, Matteo; Valentini, Gabriele. Introducing L1-regularized logistic regression in Markov networks based EDAs. 2011 IEEE Congress on Evolutionary Computation (CEC), 2011: 1581-1588.
  • [7] Luo, Si-Shu; Weng, Yang; Wang, Wei-Wei; Hong, Wen-Xing. L1-regularized logistic regression for event-driven stock market prediction. 2017 12th International Conference on Computer Science and Education (ICCSE 2017), 2017: 536-541.
  • [8] Shi, Jianing; Yin, Wotao; Osher, Stanley; Sajda, Paul. A fast hybrid algorithm for large-scale L1-regularized logistic regression. Journal of Machine Learning Research, 2010, 11: 713-741.
  • [9] Ravikumar, Pradeep; Wainwright, Martin J.; Lafferty, John D. High-dimensional Ising model selection using L1-regularized logistic regression. Annals of Statistics, 2010, 38(3): 1287-1319.
  • [10] Koh, Kwangmoo; Kim, Seung-Jean; Boyd, Stephen. An interior-point method for large-scale L1-regularized logistic regression. Journal of Machine Learning Research, 2007, 8: 1519-1555.