Divide-and-conquer learning and modular perceptron networks

Cited: 34
Authors
Fu, H.-C. [1 ]
Lee, Y.-P. [1 ]
Chiang, C.-C. [1 ]
Pao, H.-T. [1 ]
Affiliation
[1] Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
Source
IEEE Transactions on Neural Networks, 2001, Vol. 12, No. 2, Institute of Electrical and Electronics Engineers Inc.
Keywords
Learning algorithms; Learning systems; Mathematical models
DOI
10.1109/72.914522
Abstract
A novel modular perceptron network (MPN) and a divide-and-conquer learning (DCL) scheme for the design of modular neural networks are proposed. When training of a multilayer perceptron falls into a local minimum or stalls in a flat error region, the proposed DCL scheme divides the current training data region (i.e., a hard-to-learn training subset) into two regions that are, hopefully, easier to learn. Learning then continues: a self-growing perceptron network, together with an initial weight estimate, is constructed for one of the newly partitioned regions, while the other region resumes training on the original perceptron network. Data-region partitioning, weight estimation, and learning are repeated iteratively until the MPN has learned all the training data. We evaluated the proposed MPN against several representative neural networks on the two-spirals problem and on real-world datasets. The MPN achieves better weight-learning performance, requiring far fewer data presentations (87.86%–99.01% fewer) during training; better generalization (4.0% better); and less processing time (2.0%–81.3% less) during the retrieval phase. On real-world data, MPNs show less overfitting than a single MLP. Moreover, thanks to its self-growing and fast local-learning characteristics, an MPN adapts easily to online and/or incremental learning in rapidly changing environments.
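The divide-and-conquer loop described in the abstract (train until the error stalls, split the hard region in two, grow a new module, continue) can be made concrete with a short sketch. The Python below is a minimal toy illustration, not the authors' algorithm: the names (train_module, split_region, fit_mpn), the loss-plateau stall test, and the median-error split are all assumptions; the paper's initial-weight-estimation step for new modules is replaced by plain random initialization, and both halves of a split region simply re-enter the training queue, whereas in the paper one region stays with the original network.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_module(X, y, hidden=8, lr=0.5, epochs=2000, tol=1e-5):
        # One-hidden-layer perceptron trained by gradient descent on MSE.
        W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0.0, 0.5, hidden); b2 = 0.0
        prev = np.inf
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)                      # hidden activations
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output
            loss = np.mean((p - y) ** 2)
            if prev - loss < tol:                         # flat region / local minimum
                break
            prev = loss
            g = 2.0 * (p - y) * p * (1.0 - p) / len(y)    # dLoss/dlogit
            gh = np.outer(g, W2) * (1.0 - h ** 2)         # backprop to hidden layer
            W2 -= lr * h.T @ g; b2 -= lr * g.sum()
            W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
        return (W1, b1, W2, b2), np.abs(p - y), loss

    def split_region(err):
        # Divide a stalled region into a better-learned half and a
        # worse-learned half, ranked by current per-sample error.
        order = np.argsort(err)
        return order[: len(err) // 2], order[len(err) // 2:]

    def fit_mpn(X, y, loss_goal=0.02, max_modules=16):
        modules = []
        pending = [np.arange(len(y))]            # index sets still to be learned
        while pending and len(modules) < max_modules:
            idx = pending.pop()
            weights, err, loss = train_module(X[idx], y[idx])
            if loss > loss_goal and len(idx) > 4:
                easy, hard = split_region(err)
                pending += [idx[easy], idx[hard]]  # both halves re-enter the queue
            else:
                modules.append((weights, idx))     # region learned; keep the module
        return modules

    # Tiny demo on a small two-spirals set, the benchmark used in the paper.
    t = np.linspace(0.5, 3 * np.pi, 60)
    spiral = np.c_[t * np.cos(t), t * np.sin(t)] / (3 * np.pi)
    X = np.vstack([spiral, -spiral])
    y = np.r_[np.zeros(60), np.ones(60)]
    print(len(fit_mpn(X, y)), "module(s) grown")

Because split_region halves a stalled region by ranked error, every pending region strictly shrinks, so the loop terminates even when some region never reaches the loss goal.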
Related papers
50 records in total
  • [1] Divide-and-conquer learning and modular perceptron networks
    Fu, HC
    Lee, YP
    Chiang, CC
    Pao, HT
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2001, 12 (02): 250 - 263
  • [2] PRUNING DIVIDE-AND-CONQUER NETWORKS
    ROMANIUK, SG
    NETWORK-COMPUTATION IN NEURAL SYSTEMS, 1993, 4 (04): 481 - 494
  • [3] DIVIDE-AND-CONQUER NEURAL NETWORKS
    ROMANIUK, SG
    HALL, LO
    NEURAL NETWORKS, 1993, 6 (08): 1105 - 1116
  • [4] Modular Divide-and-Conquer Parallelization of Nested Loops
    Farzan, Azadeh
    Nicolet, Victor
    PROCEEDINGS OF THE 40TH ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '19), 2019: 610 - 624
  • [5] Divide-and-conquer Tournament on Social Networks
    Wang, Jiasheng
    Zhang, Yichao
    Guan, Jihong
    Zhou, Shuigeng
    SCIENTIFIC REPORTS, 2017, 7
  • [6] A divide-and-conquer learning approach to radial basis function networks
    Cheung, YM
    Huang, RB
    NEURAL PROCESSING LETTERS, 2005, 21 (03): 189 - 206
  • [7] DIVIDE-AND-CONQUER
    JEFFRIES, T
    BYTE, 1993, 18 (03): 187 - &
  • [8] DIVIDE-AND-CONQUER
    SAWYER, P
    CHEMICAL ENGINEER-LONDON, 1990, (484): 36 - 38