Training soft margin support vector machines by simulated annealing: A dual approach

Cited by: 20
Authors
Dantas Dias, Madson L. [1]
Rocha Neto, Ajalmar R. [1]
Affiliations
[1] Fed Inst Ceara IFCE, Dept Teleinformat, Av Treze Maio 2081, BR-60040215 Fortaleza, Ceara, Brazil
Keywords
Support vector machines; Simulated annealing; Learning methods
DOI
10.1016/j.eswa.2017.06.016
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A theoretical advantage of support vector machines (SVM) is their combination of empirical and structural risk minimization, which balances the complexity of the model against its success at fitting the training data. Metaheuristics have mostly been used with support vector machines either to tune hyperparameters or to perform feature selection. In this paper, we present a new approach to obtaining sparse support vector machines based on simulated annealing (SA), named SATE. In our proposal, SA is used to solve the quadratic optimization problem that emerges from support vector machines, rather than to tune the hyperparameters. We have compared our proposal with sequential minimal optimization (SMO), the kernel adatron (KA), a standard quadratic programming (QP) solver, and recent particle swarm optimization (PSO)- and genetic algorithm (GA)-based versions. Generally speaking, SATE is equivalent to SMO in terms of accuracy and mean number of support vectors, and it is sparser than KA, QP, LPSO, and GA. SATE also achieves higher accuracy than the GA- and PSO-based versions. Moreover, SATE successfully embeds the SVM constraints and provides a competitive classifier while maintaining its simplicity and high sparseness in the solution. (C) 2017 Elsevier Ltd. All rights reserved.
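For context, the quadratic problem the abstract refers to is the standard soft-margin SVM dual (the textbook formulation, stated here for the reader's convenience rather than quoted from the paper):

```latex
\max_{\alpha \in \mathbb{R}^{n}} \; W(\alpha)
  = \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
      \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{s.t.} \quad
0 \le \alpha_i \le C, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0.
```

The sketch below illustrates how simulated annealing can search this dual directly. It is a minimal, hedged reconstruction, not the authors' exact SATE procedure: it assumes an RBF kernel, drops the equality constraint (the bias-free formulation used by kernel-adatron-style solvers) so that the box constraint can be embedded in the neighbourhood move by clipping, and all names (`sa_svm_dual`, `dual_objective`, `rbf_kernel`) and schedule parameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise RBF (Gaussian) kernel matrix; the kernel choice is illustrative.
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def dual_objective(alpha, K, y):
    # Soft-margin SVM dual: W(alpha) = sum(alpha) - 0.5 * alpha^T Q alpha,
    # with Q = (y y^T) * K elementwise. Recomputed from scratch for clarity;
    # a real solver would update it incrementally after each single-index move.
    Q = (y[:, None] * y[None, :]) * K
    return alpha.sum() - 0.5 * alpha @ Q @ alpha

def sa_svm_dual(K, y, C=1.0, n_iter=20000, T0=1.0, cooling=0.9995, step=0.1, seed=0):
    # Simulated-annealing sketch for the box-constrained SVM dual (maximization).
    # A neighbour perturbs one Lagrange multiplier and clips it to [0, C], so the
    # box constraint is satisfied by construction at every step.
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    alpha = np.zeros(n)                      # start at the feasible point alpha = 0
    f = dual_objective(alpha, K, y)
    best, f_best = alpha.copy(), f
    T = T0
    for _ in range(n_iter):
        cand = alpha.copy()
        i = rng.integers(n)
        cand[i] = np.clip(cand[i] + rng.normal(0.0, step * C), 0.0, C)
        f_cand = dual_objective(cand, K, y)
        delta = f_cand - f
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(delta / T), which shrinks as T cools.
        if delta > 0 or rng.random() < np.exp(delta / T):
            alpha, f = cand, f_cand
            if f > f_best:
                best, f_best = alpha.copy(), f
        T *= cooling                          # geometric cooling schedule
    return best

# Toy usage: a tiny two-class problem. Multipliers that remain at zero
# correspond to non-support vectors, which is where sparseness comes from.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha = sa_svm_dual(rbf_kernel(X), y, C=1.0)
```

Accepting occasional downhill moves early on is what lets the annealer escape poor configurations, while geometric cooling makes the search increasingly greedy as it converges.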
Pages: 157-169 (13 pages)
Related papers (items [21]-[30] of 50)
  • [21] A role of total margin in Support Vector Machines
    Yoon, M
    Yun, Y
    Nakayama, H
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 2049 - 2053
  • [22] Approximate training of one-class support vector machines using expected margin
    Kang, Seokho
    Kim, Dongil
    Cho, Sungzoon
    COMPUTERS & INDUSTRIAL ENGINEERING, 2019, 130 : 772 - 778
  • [23] Training hard-margin support vector machines using greedy stagewise algorithm
    Bo, Liefeng
    Wang, Ling
    Jiao, Licheng
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2008, 19 (08): : 1446 - 1455
  • [24] Training invariant support vector machines
    Decoste, D
    Schölkopf, B
    MACHINE LEARNING, 2002, 46 (1-3) : 161 - 190
  • [25] Incremental training of support vector machines
    Shilton, A
    Palaniswami, M
    Ralph, D
    Tsoi, AC
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2005, 16 (01): : 114 - 131
  • [26] Training semiparametric support vector machines
    Mattera, D
    Palmieri, F
    Haykin, S
    NEURAL NETS - WIRN VIETRI-99, 1999, : 272 - 277
  • [28] Dropout Training for Support Vector Machines
    Chen, Ning
    Zhu, Jun
    Chen, Jianfei
    Zhang, Bo
    PROCEEDINGS OF THE TWENTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2014, : 1752 - 1759
  • [29] Bearing fault diagnosis using simulated annealing algorithm and least squares support vector machines
    Sui, Wentao
    Lu, Changhou
    Wang, Wilson
    Zhang, Dan
    Zhendong Ceshi Yu Zhenduan/Journal of Vibration, Measurement and Diagnosis, 2010, 30 (02): : 119 - 122
  • [30] L2 soft margin support vector machines with a hybrid kernel for pattern recognition
    Yan, Genting
    Ma, Guangfu
    DYNAMICS OF CONTINUOUS DISCRETE AND IMPULSIVE SYSTEMS-SERIES B-APPLICATIONS & ALGORITHMS, 2006, 13E : 985 - 988