Universal Perceptron and DNA-Like Learning Algorithm for Binary Neural Networks: LSBF and PBF Implementations

Cited by: 25
Authors
Chen, Fangyue [1 ]
Chen, Guanrong [2 ]
He, Guolong [3 ]
Xu, Xiubin [3 ]
He, Qinbin [4 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Sci, Hangzhou 310018, Zhejiang, Peoples R China
[2] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Hong Kong, Peoples R China
[3] Zhejiang Normal Univ, Dept Math, Jinhua 321004, Zhejiang, Peoples R China
[4] Taizhou Univ, Dept Math, Linhai 317000, Zhejiang, Peoples R China
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2009 / Vol. 20 / No. 10
Funding
US National Science Foundation;
Keywords
Cellular neural network (CNN); DNA-like learning algorithm; linearly separable Boolean function (LSBF); multilayer perceptron (MLP); nonlinearly separable degree (NLSD); parity Boolean function (PBF); single-layer perceptron (SLP); universal perceptron (UP); BIT PARITY PROBLEM; TRADEOFFS; CNN;
DOI
10.1109/TNN.2009.2028886
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The universal perceptron (UP), a generalization of Rosenblatt's perceptron capable of implementing all Boolean functions (BFs), is considered in this paper. BFs are classified into: 1) the linearly separable Boolean function (LSBF) class; 2) the parity Boolean function (PBF) class; and 3) the non-LSBF and non-PBF class. To implement these functions, the UP adopts simple topological structures, each containing at most one hidden layer with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which can quickly train a network to realize any prescribed BF. The focus is on implementing LSBFs and PBFs by a single-layer perceptron (SLP) with the new algorithm. Criteria for LSBFs and PBFs are proposed, respectively, and a new measure for a BF, named the nonlinearly separable degree (NLSD), is introduced; in the sense of this measure, the PBF is the most complex. The new algorithm has many advantages, including fast running speed, good robustness, and no convergence issues to consider. For example, the numbers of iterations and computations required to implement the basic 2-bit logic operations AND, OR, and XOR with the new algorithm are far smaller than those required by existing algorithms such as the error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from the UP can be used directly in designing templates for cellular neural networks (CNNs), which have been considered a new spatial-temporal sensory computing paradigm.
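The LSBF/PBF distinction at the heart of the abstract can be illustrated with a small sketch (not the paper's DNA-like algorithm): a single-layer threshold perceptron realizes linearly separable BFs such as AND and OR with hand-picked weights and a threshold, while an exhaustive search over a grid of integer weights and thresholds confirms that no SLP computes XOR, the 2-bit parity function. The weight and threshold values below are illustrative choices, not taken from the paper.

```python
import itertools

def slp(weights, theta, x):
    """Single-layer perceptron: outputs 1 iff the weighted input sum reaches theta."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= theta else 0

# Hand-picked weights/thresholds realizing the linearly separable BFs AND and OR.
AND = lambda x: slp((1, 1), 2, x)
OR = lambda x: slp((1, 1), 1, x)

inputs = list(itertools.product((0, 1), repeat=2))
assert [AND(x) for x in inputs] == [0, 0, 0, 1]
assert [OR(x) for x in inputs] == [0, 1, 1, 1]

def realizes_xor(weights, theta):
    """True iff this SLP agrees with 2-bit parity (XOR) on all four inputs."""
    return all(slp(weights, theta, x) == (x[0] ^ x[1]) for x in inputs)

# Exhaustive search over an integer grid: no single-layer perceptron computes XOR.
# (The classical result holds for real-valued weights as well; the grid is a demo.)
grid = range(-3, 4)
assert not any(realizes_xor((w1, w2), t)
               for w1 in grid for w2 in grid for t in grid)
print("AND/OR are LSBF; XOR (the 2-bit PBF) is not linearly separable")
```

This is why the paper's UP needs a hidden layer for PBFs: a single threshold unit can only cut the input space with one hyperplane, which separates the AND/OR truth tables but not XOR's.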
Pages: 1645-1658
Page count: 14
Related Papers
16 records in total
  • [1] Universal Perceptron and DNA-Like Learning Algorithm for Binary Neural Networks: Non-LSBF Implementation
    Chen, Fangyue
    Chen, Guanrong
    He, Qinbin
    He, Guolong
    Xu, Xiubin
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 2009, 20 (8): 1293-1301
  • [2] LEARNING TIMES OF NEURAL NETWORKS - EXACT SOLUTION FOR A PERCEPTRON ALGORITHM
    OPPER, M
    [J]. PHYSICAL REVIEW A, 1988, 38 (7): 3824-3826
  • [3] DNA-Like Learning Algorithm of CNN Template Implementing Boolean Functions
    Chen, Fangyue
    Chen, Guanrong
    He, Qinbin
    [J]. ISCAS: 2009 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, VOLS 1-5, 2009: 2701+
  • [4] The research on learning algorithm of binary neural networks
    Hua, Q
    Zheng, QL
    [J]. PROCEEDINGS OF THE 2002 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-3, 2002: 541-546
  • [5] A flexible learning algorithm for binary neural networks
    Yamamoto, A
    Saito, T
    [J]. IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 1998, E81A (9): 1925-1930
  • [6] PERCEPTRON-LIKE LEARNING IN TIME-SUMMATING NEURAL NETWORKS
    BRESSLOFF, PC
    TAYLOR, JG
    [J]. JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1992, 25 (16): 4373-4388
  • [7] Privacy-Preserving Protocols for Perceptron Learning Algorithm in Neural Networks
    Samet, Saeed
    Miri, Ali
    [J]. 2008 4TH INTERNATIONAL IEEE CONFERENCE INTELLIGENT SYSTEMS, VOLS 1 AND 2, 2008: 459-464
  • [8] AN EFFICIENT LEARNING ALGORITHM FOR BINARY FEEDFORWARD NEURAL NETWORKS
    Zhou, Jianxin
    Zeng, Xiaoqin
    Chan, Patrick P. K.
    [J]. PROCEEDINGS OF 2015 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOL. 2, 2015: 609-615
  • [9] An efficient learning algorithm for binary feedforward neural networks
    Zeng X.
    Zhou J.
    Zheng X.
    Zhong S.
    [J]. Journal of Harbin Institute of Technology, 2016, 48: 148-154
  • [10] An exact learning algorithm for autoassociative neural networks with binary couplings
    Milde, G
    Kobe, S
    [J]. JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1997, 30 (7): 2349-2352