Discovery of Optimal Neurons and Hidden Layers in Feed-Forward Neural Network

Cited by: 0
Authors
Thomas, Likewin [1 ]
Kumar, Manoj M., V [1 ]
Annappa, B. [1 ]
Affiliations
[1] NITK, Dept CSE, Mangaluru, Karnataka, India
Keywords
Self-organizing neural network; Cognitron; Feed-forward neural network; Neurons; Hidden layers; ADALINE; Gradient descent; Master-slave model
DOI
Not available
Chinese Library Classification
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Identifying the number of neurons in each hidden layer, and the number of hidden layers, in a multi-layered Artificial Neural Network (ANN) is a challenge that depends on the input data. A new hypothesis is proposed for organizing the synapse from neuron x to neuron y: the number of neurons whose synapses should fire between hidden layers is identified. With this hypothesis, an effective number of neurons in a multi-layered ANN can be determined, and a self-organizing neural network model, referred to as a cognitron, is developed. The conventional brain-inspired model is a three-layer perceptron, whereas the proposed model organizes the number of layers that is optimal for building an effective model. The results show that the proposed model constructs a neural network directly by identifying the optimal weight of each neuron and the number of neurons in each dynamically determined hidden layer. The model self-organizes with a different number of neurons in each hidden layer, comparing candidates by computational time and by the error at each iteration; an efficient number of neurons is found using gradient descent. The proposed model can thus train large networks for classification by inserting the optimal number of layers and neurons.
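The abstract's core idea, selecting the number of hidden neurons by training candidates with gradient descent and comparing their error, can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' code: it grows a single hidden layer one neuron at a time on a toy dataset and keeps the width with the lowest validation error.

```python
import numpy as np

# Hypothetical sketch of width selection by gradient descent; the dataset,
# hyperparameters, and stopping rule are illustrative assumptions, not the
# paper's actual procedure.
rng = np.random.default_rng(0)

# Toy 2-D binary classification data (XOR-like), split into train/validation.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def train_mlp(n_hidden, epochs=2000, lr=0.5):
    """One hidden layer of n_hidden sigmoid units, gradient descent on MSE.

    Returns the mean squared error on the held-out validation split.
    """
    W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(Xtr @ W1 + b1)                  # forward pass, hidden layer
        out = sig(h @ W2 + b2)                  # forward pass, output
        d_out = (out - ytr) * out * (1 - out)   # backprop: output delta (MSE)
        d_h = (d_out @ W2.T) * h * (1 - h)      # backprop: hidden delta
        W2 -= lr * h.T @ d_out / len(Xtr); b2 -= lr * d_out.mean(0)
        W1 -= lr * Xtr.T @ d_h / len(Xtr); b1 -= lr * d_h.mean(0)
    return np.mean((sig(sig(Xva @ W1 + b1) @ W2 + b2) - yva) ** 2)

# Grow the hidden layer and keep the width with the lowest validation error.
best_width, best_err = 1, train_mlp(1)
for width in range(2, 9):
    err = train_mlp(width)
    if err < best_err:
        best_width, best_err = width, err
print("chosen hidden width:", best_width, "validation MSE:", round(best_err, 4))
```

The same loop extends to depth: once a layer's width stops reducing the validation error, a further layer can be appended and grown in the same way, which matches the paper's notion of dynamically identified hidden layers.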
Pages: 286-291 (6 pages)