A method for estimating the number of hidden neurons in feed-forward neural networks based on information entropy

Cited by: 55
Authors
Yuan, HC
Xiong, FL
Huai, XY
Affiliations
[1] Univ Sci & Technol China, Dept Automat, Hefei 230026, Peoples R China
[2] Chinese Acad Sci, Inst Intelligent Machines, Hefei 230031, Peoples R China
Keywords
neural network; information entropy; architecture selection; hidden neurons;
DOI
10.1016/S0168-1699(03)00011-5
Chinese Library Classification
S [Agricultural Sciences];
Discipline Code
09;
Abstract
The number of hidden neurons in feed-forward neural networks is generally decided on the basis of experience. This approach usually results in a lack or redundancy of hidden neurons, causing either insufficient capacity for storing information or excessive learning. This research proposes a new method for optimizing the number of hidden neurons based on information entropy. First, an initial neural network with enough hidden neurons is trained on a set of training samples. Second, the activation values of the hidden neurons are calculated by inputting the training samples that the trained network can identify correctly. Third, all candidate partitions are tried and their information gain calculated, and a decision tree that correctly divides the whole sample space can then be constructed. Finally, the important and relevant hidden neurons included in the tree can be found by searching the whole tree, and the other redundant hidden neurons can be deleted. Thus, the number of hidden neurons can be decided. Taking a neural network with the best number of hidden units for tea quality evaluation as an example, the results show that the method is effective. (C) 2003 Published by Elsevier B.V.
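The abstract describes ranking hidden neurons by the information gain of partitions over their activation values. As a simplified sketch of that idea (the paper builds a full decision tree; here each neuron is scored by a single median-threshold split, and the function names, the `min_gain` parameter, and the NumPy-based setup are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(activations, labels, threshold):
    """Gain from splitting the samples on one neuron's activation at `threshold`."""
    left = labels[activations <= threshold]
    right = labels[activations > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0  # degenerate split carries no information
    h_split = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - h_split

def select_hidden_neurons(H, y, min_gain=0.05):
    """Keep hidden neurons whose median-threshold split yields non-trivial gain.

    H : (n_samples, n_hidden) activation matrix of the trained network,
        computed on the correctly classified training samples
    y : (n_samples,) class labels of those samples
    """
    keep = []
    for j in range(H.shape[1]):
        gain = information_gain(H[:, j], y, np.median(H[:, j]))
        if gain > min_gain:
            keep.append(j)
    return keep
```

For example, a neuron whose activations separate the classes cleanly is retained, while a neuron with constant (uninformative) activations is pruned; the surviving indices determine the final hidden-layer size.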
Pages: 57-64
Page count: 8
Related papers
(50 records)
  • [41] Optimizing FPGA implementation of Feed-Forward Neural Networks
    Oniga, S.
    Tisan, A.
    Mic, D.
    Buchman, A.
    Vida-Ratiu, A.
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON OPTIMIZATION OF ELECTRICAL AND ELECTRONIC EQUIPMENT, VOL IV, 2008, : 31 - 36
  • [42] Evapotranspiration estimation using feed-forward neural networks
    Kisi, Ozgur
    NORDIC HYDROLOGY, 2006, 37 (03) : 247 - 260
  • [43] SafetyCage: A misclassification detector for feed-forward neural networks
    Johnsen, Pal Vegard
    Remonato, Filippo
    NORTHERN LIGHTS DEEP LEARNING CONFERENCE, VOL 233, 2024, 233 : 113 - 119
  • [44] FEED-FORWARD NEURAL NETWORKS TO ESTIMATE STOKES PROFILES
    Raygoza-Romero, Joan Manuel
    Nava, Irvin Hussein Lopez
    Ramirez-Velez, Julio Cesar
    REVISTA MEXICANA DE ASTRONOMIA Y ASTROFISICA, 2024, 60 (02) : 343 - 354
  • [45] Invariance priors for Bayesian feed-forward neural networks
    von Toussaint, Udo
    Gori, Silvio
    Dose, Volker
    NEURAL NETWORKS, 2006, 19 (10) : 1550 - 1557
  • [46] An Efficient Hardware Implementation of Feed-Forward Neural Networks
    Tamás Szabó
    Gábor Horváth
    Applied Intelligence, 2004, 21 : 143 - 158
  • [47] The errors in simultaneous approximation by feed-forward neural networks
    Xie, Tingfan
    Cao, Feilong
    NEUROCOMPUTING, 2010, 73 (4-6) : 903 - 907
  • [48] Modeling a scrubber using feed-forward neural networks
    Milosavljevic, N
    Heikkilä, P
    TAPPI JOURNAL, 1999, 82 (03): : 197 - 201
  • [49] A numerical verification method for multi-class feed-forward neural networks
    Grimm, Daniel
    Tollner, David
    Kraus, David
    Torok, Arpad
    Sax, Eric
    Szalay, Zsolt
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 247
  • [50] Feed-forward artificial neural networks: Applications to spectroscopy
    Cirovic, DA
    TRAC-TRENDS IN ANALYTICAL CHEMISTRY, 1997, 16 (03) : 148 - 155