Solving flat-spot problem in back-propagation learning algorithm based on magnified error

Cited by: 0
Authors
Yang, B [1 ]
Wang, YD [1 ]
Su, XH [1 ]
Wang, LJ [1 ]
Affiliation
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
Keywords
back propagation; artificial neural network; magnifying error coefficient (MEC); elite-preservation strategy (EPS); flat-spot problem;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A new learning algorithm based on a magnified error signal is proposed to speed up the training of back-propagation neural networks and to improve their performance. The key idea is to vary the error term of the output layer so that the backward-propagated error signal is magnified, especially when the weight adjustment of the output layer is slow or even suppressed. The algorithm thereby escapes the influence of the "flat spot" problem and alleviates slow convergence: the convergence rate is accelerated, and training can meet the convergence criteria quickly with a simple network structure. Experiments on the parity-3 problem and a soybean data classification problem show that this method learns faster and at lower computational cost than most improved algorithms, such as the sigmoid-prime offset technique (SPO) and the scaled linear approximation of sigmoid method (SLA).
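The abstract describes the method only at a high level. The following minimal Python sketch illustrates the general idea of magnifying the output-layer error term, assuming a simple constant magnifying error coefficient; the function names and the `mec` value are illustrative assumptions, not details taken from the paper, which does not specify its coefficient schedule in the abstract.

```python
def output_delta_standard(target, output):
    """Standard BP output-layer error term: (t - o) * f'(net).

    For a sigmoid unit, f'(net) = o * (1 - o), which vanishes as o
    saturates toward 0 or 1 (the "flat spot"), so weight updates
    stall even when the raw error (t - o) is still large.
    """
    return (target - output) * output * (1.0 - output)


def output_delta_magnified(target, output, mec=2.0):
    """Output-layer error term scaled by a magnifying coefficient.

    `mec` is a hypothetical constant stand-in for the paper's MEC:
    it simply scales the standard delta so the backward-propagated
    signal stays usable when the sigmoid derivative is near zero.
    """
    return mec * output_delta_standard(target, output)


# Demo: a nearly saturated output with a large raw error.
t, o = 1.0, 0.01
print(output_delta_standard(t, o))   # ~0.0098: update almost suppressed
print(output_delta_magnified(t, o))  # ~0.0196: magnified error signal
# For comparison, the SPO baseline cited in the abstract instead adds
# a small offset (commonly 0.1) to the derivative:
# (t - o) * (o * (1 - o) + 0.1).
```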
Pages: 1784-1788
Page count: 5