Design of RBF neural network based on SAPSO algorithm

Cited by: 0
Authors
Zhang W. [1]
Huang W.-M. [1]
Institution
[1] College of Electrical Engineering and Automation, Henan Polytechnic University, Jiaozuo
Source
Kongzhi yu Juece/Control and Decision | 2021, Vol. 36, No. 09
Keywords
Convergence several times; Dynamic optimization; Particle swarm optimization; RBF neural network; Sensitivity analysis;
DOI
10.13195/j.kzyjc.2020.0176
Abstract
Aiming at the dynamic optimization of the structure and parameters of the radial basis function (RBF) neural network, an optimization algorithm based on sensitivity analysis (SA) and particle swarm optimization (PSO), termed SAPSO-RBF, is proposed. Firstly, the amount of particle information is randomly initialized, particle information is added and deleted via sensitivity analysis during the learning phase, and the network structure at the algorithm's first convergence is determined. Then, after the algorithm converges, the sensitivity of the optimal particle is analyzed, redundant information is deleted, and the algorithm is made to re-diverge. An inertia weight update method is proposed that makes the algorithm perform multiple divergences and convergences in the solution space, enhancing its search ability while reducing the network structure, and the convergence of the SAPSO algorithm is proved. Finally, experimental results show that the proposed SAPSO-RBF algorithm has good self-organizing ability and achieves markedly better network compactness and accuracy than several existing methods. © 2021, Editorial Office of Control and Decision. All rights reserved.
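The abstract describes an RBF network whose structure and parameters are encoded in PSO particles and tuned with an inertia weight update. The paper's actual SA-based pruning rule and inertia weight schedule are not given in the abstract, so the following is only a minimal sketch of the two standard building blocks involved: a Gaussian RBF forward pass and a canonical inertia-weight PSO velocity/position update.

```python
import math
import random

def rbf_output(x, centers, widths, weights):
    """Forward pass of an RBF network: weighted sum of Gaussian kernels."""
    out = 0.0
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2.0 * s ** 2))
    return out

def pso_step(positions, velocities, pbest, gbest, w_inertia, c1=2.0, c2=2.0):
    """One canonical PSO update; w_inertia is the inertia weight the paper
    adapts to force repeated divergence/convergence (schedule assumed)."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w_inertia * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
```

In the SAPSO scheme, each particle would additionally carry the hidden-node set (centers, widths, weights), with low-sensitivity nodes deleted after each convergence before the inertia weight is raised to re-diverge the swarm.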
Pages: 2305-2312 (7 pages)