An efficient learning algorithm for binary feedforward neural networks

Cited by: 0
Authors:
Zeng X. [1 ]
Zhou J. [1 ]
Zheng X. [1 ]
Zhong S. [2 ]
Affiliations:
[1] Institute of Intelligence Science and Technology, Computer and Information College, Hohai University, Nanjing
[2] School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing
Source:
Zhou, Jianxin (zhoujx0219@163.com) | Journal of Harbin Institute of Technology, 2016, Vol. 48
Keywords:
Architecture pruning; Binary feedforward neural network; Classification; Learning algorithm; Sensitivity;
DOI:
10.11918/j.issn.0367-6234.2016.05.024
Abstract:
Focusing on the lack of an efficient and practical learning algorithm for Binary Feedforward Neural Networks (BFNNs), a novel learning algorithm that fuses self-adaptation of both architecture and weights is proposed for training BFNNs. By improving the methodology of the Extreme Learning Machine (ELM), the algorithm can effectively train single-hidden-layer BFNNs for classification problems. To satisfy the training-accuracy requirement, the algorithm automatically adds hidden neurons and adjusts their weights with the Perceptron Learning Rule. To improve generalization accuracy, the algorithm establishes the binary neuron's sensitivity as a tool for measuring the relevance of each hidden neuron, and automatically prunes the least relevant hidden neuron, with compensation for the information loss caused by pruning. Experimental results verify the feasibility and effectiveness of the proposed algorithm. © 2016, Editorial Board of Journal of Harbin Institute of Technology. All rights reserved.
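The abstract describes the algorithm only at a high level: grow hidden neurons with the Perceptron Learning Rule until training accuracy is met, then prune the least relevant neuron as judged by a sensitivity measure. Below is a minimal Python sketch of that grow-then-prune idea, not the paper's exact method: it assumes bipolar sign activations in {-1, +1}, a perceptron-trained output layer, and an output-flip frequency as a crude stand-in for the paper's sensitivity measure. All identifiers (BFNN, grow_and_train, sensitivity, prune_least_relevant) are hypothetical.

```python
# Illustrative sketch of a grow-then-prune BFNN trainer (hypothetical names;
# assumptions: bipolar {-1,+1} neurons, perceptron-trained output layer,
# output-flip rate as the sensitivity/relevance measure).
import numpy as np

def sign(x):
    """Bipolar step activation: maps to {-1, +1}."""
    return np.where(x >= 0, 1, -1)

def perceptron_train(X, y, rng, epochs=100, lr=1.0):
    """Perceptron Learning Rule on bipolar targets; returns (weights, bias)."""
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, ti in zip(X, y):
            o = 1 if xi @ w + b >= 0 else -1
            if o != ti:              # misclassified: move hyperplane toward target
                w += lr * ti * xi
                b += lr * ti
                errors += 1
        if errors == 0:              # converged on the training set
            break
    return w, b

class BFNN:
    """Single-hidden-layer binary feedforward network (illustrative only)."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.hidden = []             # list of (w, b) pairs, one per hidden neuron

    def hidden_out(self, X):
        return np.column_stack([sign(X @ w + b) for w, b in self.hidden])

    def predict(self, X):
        H = self.hidden_out(X)
        return sign(H @ self.w_out + self.b_out)

    def grow_and_train(self, X, y, target_acc=1.0, max_hidden=20):
        """Add hidden neurons until the training-accuracy target is met.
        New neurons differ only via random initialization here, whereas the
        paper's constructive step is more directed."""
        while len(self.hidden) < max_hidden:
            self.hidden.append(perceptron_train(X, y, self.rng))
            self.w_out, self.b_out = perceptron_train(self.hidden_out(X), y, self.rng)
            if np.mean(self.predict(X) == y) >= target_acc:
                break

    def sensitivity(self, X, j):
        """Fraction of inputs whose network output flips when hidden neuron j's
        output is inverted -- a crude relevance measure."""
        H = self.hidden_out(X)
        Hf = H.copy()
        Hf[:, j] *= -1
        return np.mean(sign(H @ self.w_out + self.b_out)
                       != sign(Hf @ self.w_out + self.b_out))

    def prune_least_relevant(self, X, y):
        """Remove the least relevant hidden neuron, then retrain the output
        layer as a simple form of compensation for the information loss."""
        if len(self.hidden) <= 1:
            return
        j = min(range(len(self.hidden)), key=lambda k: self.sensitivity(X, k))
        del self.hidden[j]
        self.w_out, self.b_out = perceptron_train(self.hidden_out(X), y, self.rng)
```

Under these assumptions, a typical call sequence would be: net = BFNN(); net.grow_and_train(X, y, target_acc=0.95); net.prune_least_relevant(X, y), with X a feature matrix and y bipolar labels in {-1, +1}.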
Pages: 148-154
Page count: 6
References (12 in total):
  • [1] Zhong S., Zeng X., Liu H., Et al., Approximate computation of Madaline sensitivity based on discrete stochastic technique, Science China Information Sciences, 53, 12, pp. 2399-2414, (2010)
  • [2] Rumelhart D.E., Hinton G.E., Williams R.J., Learning representations by back-propagating errors, Nature, 323, 9, pp. 533-536, (1986)
  • [3] Rumelhart D.E., Mcclelland J.L., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, (1986)
  • [4] Winter R., Widrow B., Madaline rule II: a training algorithm for neural networks, IEEE International Conference on Neural Networks, pp. 401-408, (1988)
  • [5] Winter R., Madaline rule II: a new method for training networks of Adalines, (1989)
  • [6] Huang G.B., Zhu Q., Siew C.K., Extreme learning machine: a new learning scheme of feedforward neural networks, Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985-990, (2004)
  • [7] Huang G.B., Zhu Q., Siew C.K., Extreme learning machine: theory and applications, Neurocomputing, 70, 1-3, pp. 489-501, (2006)
  • [8] Hornik K., Stinchcombe M., White H., Multilayer feedforward networks are universal approximators, Neural Networks, 2, 5, pp. 359-366, (1989)
  • [9] Hagan M.T., Demuth H.B., Beale M.H., Neural Network Design, pp. 24-27, (2002)
  • [10] Reed R., Pruning algorithms-a survey, IEEE Transactions on Neural Networks, 4, 5, pp. 740-747, (1993)