Hybrid training of feed-forward neural networks with particle swarm optimization

Cited by: 0
Authors
Carvalho, M. [1 ]
Ludermir, T. B. [1 ]
Affiliations
[1] Univ Fed Pernambuco, Ctr Informat, BR-50732970 Recife, PE, Brazil
Source
NEURAL INFORMATION PROCESSING, PT 2, PROCEEDINGS | 2006, Vol. 4233
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Training neural networks is a complex task of great importance in supervised learning problems. Particle Swarm Optimization (PSO) is a stochastic global search method that originated from attempts to graphically simulate the social behavior of a flock of birds searching for resources. In this work we analyze the use of the PSO algorithm and two variants with a local search operator for neural network training, and we investigate the influence of the GL(5) stopping criterion on generalization control for swarm optimizers. To evaluate these algorithms, we apply them to benchmark classification problems from the medical field. The results show that the hybrid GCPSO with a local search operator achieved the best results among the particle swarm optimizers in two of the three problems tested.
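The core idea of PSO-based network training can be sketched as follows: each particle encodes a flattened weight vector of a feed-forward network, and its fitness is the training error. This is a minimal illustrative sketch on a toy XOR task with a hypothetical 2-3-1 architecture and standard inertia-weight PSO; it is not the authors' GCPSO or local-search variants.

```python
# Minimal PSO sketch for training a tiny feed-forward network on XOR.
# Assumptions (not from the paper): 2-3-1 tanh/sigmoid network, standard
# inertia-weight PSO with 30 particles; purely illustrative hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

D = 2 * 3 + 3 + 3 * 1 + 1  # flattened weight count of a 2-3-1 network

def forward(w, X):
    """Decode a flat weight vector and run the network."""
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12].reshape(3, 1), w[12]
    h = np.tanh(X @ W1 + b1)           # hidden layer
    out = (h @ W2).ravel() + b2        # output layer (pre-activation)
    return 1.0 / (1.0 + np.exp(-out))  # sigmoid output

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# PSO parameters: inertia weight and cognitive/social coefficients.
n, inertia, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n, D))
vel = np.zeros((n, D))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()     # global best position

for _ in range(300):
    r1, r2 = rng.random((n, D)), rng.random((n, D))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()

print("final training MSE:", round(mse(g), 4))
```

The hybrid variants studied in the paper additionally apply a local search operator to refine promising particles; that refinement step is omitted here for brevity.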
Pages: 1061 - 1070
Page count: 10
Related papers
50 records in total
  • [33] Vortex search optimization algorithm for training of feed-forward neural network
    Sag, Tahir
    Jalil, Zainab Abdullah Jalil
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (05) : 1517 - 1544
  • [34] An ensemble of differential evolution and Adam for training feed-forward neural networks
    Xue, Yu
    Tong, Yiling
    Neri, Ferrante
    INFORMATION SCIENCES, 2022, 608 : 453 - 471
  • [35] Unsupervised, smooth training of feed-forward neural networks for mismatch compensation
    Surendran, AC
    Lee, CH
    Rahim, M
    1997 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING, PROCEEDINGS, 1997, : 482 - 489
  • [36] An evolutionary approach to training feed-forward and recurrent neural networks.
    Riley, J
    Ciesielski, VB
    1998 SECOND INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED INTELLIGENT ELECTRONIC SYSTEMS, KES '98, PROCEEDINGS, VOL. 3, 1998, : 596 - 602
  • [37] A training-time analysis of robustness in feed-forward neural networks
    Alippi, C
    Sana, D
    Scotti, F
    2004 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2004, : 2853 - 2858
  • [38] Dynamic group optimisation algorithm for training feed-forward neural networks
    Tang, Rui
    Fong, Simon
    Deb, Suash
    Vasilakos, Athanasios V.
    Millham, Richard C.
    NEUROCOMPUTING, 2018, 314 : 1 - 19
  • [39] A modified hidden weight optimization algorithm for feed-forward neural networks
    Yu, CH
    Manry, MT
    THIRTY-SIXTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS - CONFERENCE RECORD, VOLS 1 AND 2, CONFERENCE RECORD, 2002, : 1034 - 1038
  • [40] Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks
    Aguiar, Manuela A. D.
    Dias, Ana Paula S.
    Ferreira, Flora
    CHAOS, 2017, 27 (01)