Convergence analysis of a segmentation algorithm for the evolutionary training of neural networks

Cited by: 0
Author
Hüning, H [1 ]
Affiliation
[1] Univ London Imperial Coll Sci Technol & Med, Dept Elect & Elect Engn, London SW7 2BT, England
DOI
10.1109/ECNN.2000.886222
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In contrast to standard genetic algorithms with generational reproduction, we adopt the viewpoint of the reactor algorithm (Dittrich & Banzhaf, 1998), which is similar to steady-state genetic algorithms, but without ranking. This permits an analysis similar to Eigen's (1971) molecular evolution model. From this viewpoint, we consider combining segments from different populations into one genotype at every time-step, which can be regarded as many-parent combinations with fixed crossover points, and is comparable to cooperative evolution (Potter & De Jong, 2000). We present fixed-point analysis and phase portraits of the competitive dynamics, with the result that only the first-order (single-parent) replicators exhibit global optimisation. A segmentation algorithm is developed that theoretically ensures convergence to the global optimum while keeping the cooperative or reactor aspect for a better exploration of the search space. The algorithm creates different population islands for such cases of competition that otherwise cannot be solved correctly by the population dynamics. The population islands have different segmentation boundaries, which are generated by combining well-converged components into new segments. This gives first-order replicators that have the appropriate dynamical properties to compete with new solutions. Furthermore, the population islands communicate information about which solution strings have been found already, so new ones can be favoured. Strings that have converged on any island are taken as the characterisation of a fitness peak and disallowed afterwards to enforce diversity. This has been found to be successful in the simulation of an evolving neural network. We also present a simulation example where the genotypes on different islands have converged differently. A perspective from this research is to recombine building blocks explicitly from the different islands, each time the populations converge.
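The first-order replicator dynamics referred to in the abstract can be sketched numerically. The following is an illustrative toy simulation only, not the paper's algorithm: it integrates the standard replicator equation x_i' = x_i (f_i - mean fitness), the selection model underlying Eigen-style analyses, with hypothetical fitness values, showing that the frequency of the fittest type goes to one (global optimisation by a single-parent replicator).

```python
import numpy as np

# Illustrative sketch only: first-order (single-parent) replicator dynamics,
#   x_i' = x_i * (f_i - mean fitness),
# as in Eigen-style selection models. Fitness values, step size, and
# iteration count are arbitrary choices for illustration.

def replicator_step(x, f, dt=0.01):
    """One explicit Euler step of the replicator equation."""
    avg = x @ f                      # population mean fitness
    return x + dt * x * (f - avg)

f = np.array([1.0, 2.0, 3.5])        # per-type fitness (hypothetical)
x = np.array([0.6, 0.3, 0.1])        # initial frequencies, summing to 1

for _ in range(5000):
    x = replicator_step(x, f)
    x = x / x.sum()                  # renormalise against numerical drift

# The fittest type comes to dominate the population, illustrating the
# global-optimisation property of first-order replicators.
print(x)
```

Under these dynamics the outcome is independent of the (strictly positive) initial frequencies; higher-order (many-parent) combination terms break this property, which is what motivates the segmentation algorithm above.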
Pages: 70-81
Page count: 12