Sparse least squares support vector training in the reduced empirical feature space

Cited by: 31
Author
Abe, Shigeo [1 ]
Affiliation
[1] Kobe Univ, Grad Sch Sci & Technol, Kobe, Hyogo 657, Japan
Keywords
Cholesky factorization; empirical feature space; least squares support vector machines; multi-class support vector machines; pattern classification; RBF kernels; support vector machines;
DOI
10.1007/s10044-007-0062-1
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we discuss sparse least squares support vector machines (sparse LS SVMs) trained in the empirical feature space, which is spanned by the mapped training data. First, we show that the kernel associated with the empirical feature space gives the same value as the kernel associated with the feature space if one of the arguments of the kernels is mapped into the empirical feature space by the mapping function associated with the feature space. Using this fact, we show that training and testing of kernel-based methods can be done in the empirical feature space, and that training of LS SVMs in the empirical feature space reduces to solving a set of linear equations. We then derive sparse LS SVMs by restricting the training data to those that are linearly independent in the empirical feature space, selected by Cholesky factorization. The support vectors correspond to the selected training data, and they do not change even if the value of the margin parameter is changed. Thus, for linear kernels, the number of support vectors is at most the number of input variables. Computer experiments show that the number of support vectors can be reduced without deteriorating the generalization ability.
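The two steps the abstract describes — selecting linearly independent training data by Cholesky factorization and then solving a set of linear equations in the empirical feature space — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the function names, the pivoting tolerance, the RBF parameter, and the ridge-style treatment of the bias term are choices made here for brevity.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def select_independent(K, tol=1e-6):
    """Greedy incomplete Cholesky factorization of the kernel matrix K.
    Returns indices of training data whose images are linearly independent
    in the empirical feature space (residual pivot above `tol`)."""
    n = K.shape[0]
    selected, cols = [], []          # cols: computed columns of the Cholesky factor
    for i in range(n):
        # residual diagonal entry after projecting onto the selected columns
        r = K[i, i] - sum(c[i] ** 2 for c in cols)
        if r > tol:
            cols.append((K[:, i] - sum(c * c[i] for c in cols)) / np.sqrt(r))
            selected.append(i)
    return selected

def train_sparse_lssvm(X, y, gamma=1.0, C=100.0, tol=1e-6):
    """Sparse LS SVM sketch: kernel values against the selected data serve as
    the (reduced) empirical features; training is one linear solve."""
    K = rbf_kernel(X, X, gamma)
    S = select_independent(K, tol)           # support vector candidates
    Z = np.hstack([K[:, S], np.ones((len(X), 1))])   # features + bias column
    # Regularized normal equations; note this also shrinks the bias,
    # a simplification relative to the standard LS-SVM formulation.
    wb = np.linalg.solve(Z.T @ Z + np.eye(Z.shape[1]) / C, Z.T @ y)
    return wb[:-1], wb[-1], S

def predict(X_new, X_train, S, w, b, gamma=1.0):
    # Only the selected training data (the support vectors) are needed.
    return rbf_kernel(X_new, X_train[S], gamma) @ w + b
```

Because the selection step depends only on the kernel matrix, the support vectors indeed stay fixed when the margin parameter C changes, matching the claim in the abstract; with a linear kernel the Gram matrix has rank at most the input dimension, so the selection cannot return more support vectors than input variables.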
Pages: 203-214
Page count: 12