Research Progress on Stochastic Configuration Network

Cited by: 0
Authors
Zhang C.-L. [1 ]
Ding S.-F. [1 ,2 ]
Guo L.-L. [1 ,2 ]
Zhang J. [1 ,2 ]
Affiliations
[1] School of Computer Science and Technology, China University of Mining and Technology, Xuzhou
[2] Engineering Research Center of Mine Digitization, China University of Mining and Technology, Ministry of Education, Xuzhou
Source
Ruan Jian Xue Bao/Journal of Software | 2024, Vol. 35, No. 05
Keywords
deep learning; neural network; randomized learning; research progress; stochastic configuration network (SCN);
DOI
10.13328/j.cnki.jos.006804
Abstract
Stochastic configuration network (SCN) is an emerging incremental neural network model that differs from other randomized neural network methods: it configures the parameters of hidden-layer nodes through a supervisory mechanism, thereby guaranteeing fast convergence. Owing to its advantages of high learning efficiency, little human intervention, and strong generalization ability, SCN has attracted extensive attention from scholars in China and abroad and has developed rapidly since it was proposed in 2017. This study surveys SCN research from the aspects of basic theories, typical algorithm variants, application fields, and future research directions. Firstly, the algorithm principles, universal approximation capacity, and advantages of SCN are analyzed theoretically. Secondly, typical variants of SCN are reviewed, such as DeepSCN, 2DSCN, Robust SCN, Ensemble SCN, Distributed SCN, Parallel SCN, and Regularized SCN. Then, the applications of SCN in different fields are introduced, including hardware implementation, computer vision, medical data analysis, fault detection and diagnosis, and system modeling and prediction. Finally, the development potential of SCN in convolutional neural network architectures, semi-supervised learning, unsupervised learning, multi-view learning, fuzzy neural networks, and recurrent neural networks is pointed out. © 2024 Chinese Academy of Sciences. All rights reserved.
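To illustrate the supervisory mechanism the abstract refers to, the following is a minimal NumPy sketch of SCN training in the style of the SC-III algorithm (global least-squares output weights). The function name `scn_fit`, the candidate count `T_max`, the scale set `lambdas`, and the toy target function are illustrative assumptions, not the authors' implementation: a candidate hidden node with random weights and bias is accepted only if it satisfies a supervisory inequality on the current residual, which is what ensures the universal approximation property.

```python
import numpy as np

def scn_fit(X, y, L_max=50, T_max=20, tol=1e-2,
            lambdas=(0.5, 1, 5, 10), r=0.999, rng=None):
    """Sketch of SCN training: nodes are added incrementally, and each
    random candidate must pass a supervisory inequality before it is
    accepted; output weights are then recomputed by least squares."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    y = y.reshape(n, -1)
    H = np.empty((n, 0))                  # hidden-layer output matrix
    beta = np.zeros((0, y.shape[1]))      # output weights
    e = y.copy()                          # current residual error
    for L in range(1, L_max + 1):
        if np.linalg.norm(e) < tol:       # residual small enough: stop
            break
        best_h, best_xi = None, -np.inf
        mu = (1 - r) / (L + 1)            # relaxing sequence for the bound
        for lam in lambdas:               # widen the random range if needed
            for _ in range(T_max):        # draw T_max candidates per range
                w = rng.uniform(-lam, lam, size=d)
                b = rng.uniform(-lam, lam)
                h = np.tanh(X @ w + b)
                # supervisory inequality, summed over output dimensions
                xi = (np.sum((e.T @ h) ** 2) / (h @ h)
                      - (1 - r - mu) * np.sum(e ** 2))
                if xi > best_xi:
                    best_xi, best_h = xi, h
        if best_xi <= 0:                  # no admissible candidate found
            break
        H = np.column_stack([H, best_h])  # accept the best candidate node
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # global least squares
        e = y - H @ beta
    return H, beta, e

# Toy usage: approximate a 1-D nonlinear function.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X).ravel() + 0.5 * X.ravel() ** 2
H, beta, e = scn_fit(X, y, rng=0)
```

Because accepted nodes must satisfy the inequality, the residual norm is guaranteed to shrink as nodes are added, which is the source of SCN's fast convergence compared with purely random feature assignment.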
Pages: 2379-2399 (20 pages)