Comparing support vector machines and feedforward neural networks with similar hidden-layer weights

Cited by: 26
Authors
Romero, Enrique [1 ]
Toppo, Daniel
Affiliations
[1] Univ Politecn Catalunya, Dept Llenguatges & Sist Informat, Barcelona 08034, Spain
[2] Swiss Fed Inst Technol, I&C Sch Comp & Commun Sci, CH-1015 Lausanne, Switzerland
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2007, Vol. 18, No. 3
Keywords
feedforward neural networks (FNNs); sparse models; support vector machines (SVMs);
D O I
10.1109/TNN.2007.891656
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Support vector machines (SVMs) usually need a large number of support vectors to form their output. Recently, several models have been proposed to build SVMs with a small number of basis functions, maintaining the property that their hidden-layer weights are a subset of the data (the support vectors). This property is also present in some algorithms for feedforward neural networks (FNNs) that construct the network sequentially, leading to sparse models where the number of hidden units can be explicitly controlled. An experimental study on several benchmark data sets, comparing SVMs and the aforementioned sequential FNNs, was carried out. The experiments were performed under the same conditions for all the models, and they can be seen as a comparison of SVMs and FNNs when both models are restricted to use similar hidden-layer weights. Accuracies were found to be very similar. Regarding the number of support vectors, sequential FNNs constructed models with fewer hidden units than standard SVMs and in the same range as "sparse" SVMs. Computational times were lower for SVMs.
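The shared property the abstract describes, hidden-layer weights that are a subset of the training data, with the number of hidden units explicitly controlled, can be illustrated with a minimal sketch. This is not the paper's sequential construction algorithm: a random subset of the data stands in for the selected "support vectors", the readout weights are fit by least squares, and the data set, Gaussian width, and subset size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
# Nonlinearly separable toy problem: points inside vs. outside the unit circle.
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0, 1.0, -1.0)

def rbf_features(X, centers, gamma=1.0):
    """Hidden-layer activations of a Gaussian RBF network.

    Each hidden unit's weight vector is one of the `centers`, mirroring the
    SVM/FNN property that hidden-layer weights are training points.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Hidden-layer weights = a subset of the data. The subset here is random
# (a stand-in for the paper's sequential selection); its size -- the number
# of hidden units -- is controlled explicitly, unlike in a standard SVM.
n_hidden = 20
centers = X[rng.choice(len(X), size=n_hidden, replace=False)]

# Output-layer weights (plus bias) via linear least squares.
H = np.c_[rbf_features(X, centers), np.ones(len(X))]
w, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = np.sign(H @ w)
acc = (pred == y).mean()
print(f"{n_hidden} hidden units, training accuracy = {acc:.2f}")
```

A standard SVM trained on the same problem would typically select far more than 20 of the 200 points as support vectors; the point of the sparse models compared in the paper is exactly this explicit cap on hidden units.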
Pages: 959-963
Page count: 5
Related Papers
50 records in total
  • [1] DNA microarray classification with compact single hidden-layer feedforward neural networks
    Huynh, Hieu Trung
    Kim, Jung-Ja
    Won, Yonggwan
    [J]. PROCEEDINGS OF THE FRONTIERS IN THE CONVERGENCE OF BIOSCIENCE AND INFORMATION TECHNOLOGIES, 2007, : 193 - +
  • [2] Benchmarking the Selection of the Hidden-layer Weights in Extreme Learning Machines
    Romero, Enrique
    [J]. 2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 1607 - 1614
  • [3] On the approximation by single hidden layer feedforward neural networks with fixed weights
    Guliyev, Namig J.
    Ismailov, Vugar E.
    [J]. NEURAL NETWORKS, 2018, 98 : 296 - 304
  • [4] Comparing support vector machines and feed-forward neural networks with similar parameters
    Romero, Enrique
    Toppo, Daniel
    [J]. INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2006, PROCEEDINGS, 2006, 4224 : 90 - 98
  • [5] Online training for single hidden-layer feedforward neural networks using RLS-ELM
    Hieu Trung Huynh
    Won, Yonggwan
    [J]. IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE IN ROBOTICS AND AUTOMATION, 2009, : 469 - 473
  • [6] Approximation capability of two hidden layer feedforward neural networks with fixed weights
    Guliyev, Namig J.
    Ismailov, Vugar E.
    [J]. NEUROCOMPUTING, 2018, 316 : 262 - 269
  • [7] Decoding Cognitive States from fMRI Data Using Single Hidden-Layer Feedforward Neural Networks
    Huynh, Hieu Trung
    Won, Yonggwan
    [J]. NCM 2008 : 4TH INTERNATIONAL CONFERENCE ON NETWORKED COMPUTING AND ADVANCED INFORMATION MANAGEMENT, VOL 1, PROCEEDINGS, 2008, : 256 - 260
  • [8] DNA Microarray Classification Using Single Hidden-Layer Feedforward Networks Trained by SVD
    Huynh, Hieu Trung
    Kim, Jung-Ja
    Won, Yonggwan
    [J]. BIO-SCIENCE AND BIO-TECHNOLOGY, 2009, 57 : 108 - +
  • [9] On the comparison of random and Hebbian weights for the training of single-hidden layer feedforward neural networks
    Samiee, Kaveh
    Iosifidis, Alexandros
    Gabbouj, Moncef
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2017, 83 : 177 - 186
  • [10] Exploiting Hidden-Layer Responses of Deep Neural Networks for Language Recognition
    Li, Ruizhi
    Mallidi, Sri Harish
    Burget, Lukas
    Plchot, Oldrich
    Dehak, Najim
    [J]. 17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 3265 - 3269