Efficient Leave-One-Out Strategy for Supervised Feature Selection

Cited: 0
Authors
Dingcheng Feng [1]
Feng Chen [1]
Wenli Xu [1]
Affiliations
[1] National Laboratory for Information Science and Technology, Tsinghua University
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
leave-one-out; feature selection objectives; evaluation metrics;
DOI
Not available
CLC Classification
TP301.6 [Algorithm Theory];
Discipline Code
081202;
Abstract
Feature selection is a key task in statistical pattern recognition. Most feature selection algorithms are based on specific objective functions that are usually intuitively reasonable but can be far from the more basic objectives of feature selection. This paper describes how to select features so that the basic objectives, e.g., classification or clustering accuracy, can be optimized more directly. The analysis requires that the contribution of each feature to the evaluation metric can be quantitatively described by some score function. Motivated by the conditional independence structure in probability distributions, the analysis uses a leave-one-out feature selection algorithm that provides an approximate solution. The leave-one-out algorithm improves on the conventional greedy backward elimination algorithm by preserving more interactions among features during the selection process, so that various feature selection objectives can be optimized in a unified way. Experiments on six real-world datasets with different feature evaluation metrics show that this algorithm outperforms popular feature selection algorithms in most situations.
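The leave-one-out elimination idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm, only the generic backward-elimination-with-leave-one-out-scoring scheme it builds on; `score` here is a hypothetical stand-in for whatever feature-subset evaluation metric is used:

```python
def leave_one_out_selection(features, score, k):
    """Backward elimination with leave-one-out scoring: at each step,
    evaluate every subset that leaves one current feature out, and
    drop the feature whose removal hurts the metric least."""
    selected = list(features)
    while len(selected) > k:
        # For each candidate feature, score the subset without it;
        # the feature whose removal keeps the score highest is dropped.
        worst = max(selected,
                    key=lambda f: score([g for g in selected if g != f]))
        selected.remove(worst)
    return selected

# Toy example: the metric is simply a sum of per-feature weights.
weights = {"a": 3.0, "b": 1.0, "c": 2.0}
score = lambda subset: sum(weights[f] for f in subset)
print(leave_one_out_selection(["a", "b", "c"], score, k=2))  # -> ['a', 'c']
```

Because every candidate subset at a step differs from the current set by exactly one feature, interactions among the remaining features are still reflected in each score, which is the property the paper's analysis exploits.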
Pages: 629-635 (7 pages)
Related Papers
50 items total
  • [31] Efficient approximate leave-one-out cross-validation for kernel logistic regression
    Cawley, Gavin C.; Talbot, Nicola L. C.
    Machine Learning, 2008, 71: 243-264
  • [32] Feature scaling for kernel fisher discriminant analysis using leave-one-out cross validation
    Bo, L. F.; Wang, L.; Jiao, L. C.
    Neural Computation, 2006, 18(04): 961-978
  • [33] Multiple parameter selection for LS-SVM using smooth leave-one-out error
    Bo, L. F.; Wang, L.; Jiao, L. C.
    Advances in Neural Networks - ISNN 2005, Pt 1, Proceedings, 2005, 3496: 851-856
  • [34] Selection of neural network for crime time series prediction by virtual leave-one-out tests
    Jankowski, Stanislaw; Szymanski, Zbigniew; Wawrzyniak, Zbigniew; Cichosz, Pawel; Szczechla, Eliza; Pytlak, Radoslaw
    Theory and Applications of Time Series Analysis, 2019: 117-133
  • [35] Exemplar selection via leave-one-out kernel averaged gradient descent and subtractive clustering
    Kokkinos, Yiannis; Margaritis, Konstantinos G.
    Artificial Intelligence Applications and Innovations, AIAI 2016, 2016, 475: 292-304
  • [36] Spatial leave-one-out cross-validation for variable selection in the presence of spatial autocorrelation
    Le Rest, Kevin; Pinaud, David; Monestiez, Pascal; Chadoeuf, Joel; Bretagnolle, Vincent
    Global Ecology and Biogeography, 2014, 23(07): 811-820
  • [37] An efficient method for computing leave-one-out error in support vector machines with Gaussian kernels
    Lee, M. M. S.; Keerthi, S. S.; Ong, C. J.; DeCoste, D.
    IEEE Transactions on Neural Networks, 2004, 15(03): 750-757
  • [38] Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction
    Cheng, Hao; Garrick, Dorian J.; Fernando, Rohan L.
    Journal of Animal Science and Biotechnology, 8
  • [39] Leave-one-out procedures for nonparametric error estimates
    Fukunaga, K.; Hummels, D. M.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(04): 421-423
  • [40] Estimating the leave-one-out error for support vector regression
    Liu, J. X.; Tan, Y. J.
    Proceedings of the 2005 International Conference on Neural Networks and Brain, Vols 1-3, 2005: 208-213