Divide-and-conquer learning and modular perceptron networks

Cited by: 0
Authors
Fu, HC [1 ]
Lee, YP
Chiang, CC
Pao, HT
Affiliations
[1] Natl Chiao Tung Univ, Dept Comp Sci & Informat Engn, Hsinchu 300, Taiwan
[2] Natl Dong Hua Univ, Dept Comp Sci & Informat Engn, Hualien 974, Taiwan
[3] Natl Chiao Tung Univ, Dept Management Sci, Hsinchu 300, Taiwan
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2001, Vol. 12, No. 2
Keywords
divide-and-conquer learning; modular perceptron network; multilayer perceptron; weight estimation;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
A novel modular perceptron network (MPN) and a divide-and-conquer learning (DCL) scheme for the design of modular neural networks are proposed. When the training process of a multilayer perceptron falls into a local minimum or stalls in a flat region, the proposed DCL scheme divides the current training data region (i.e., a hard-to-learn training set) into two regions that are (hopefully) easier to learn. The learning process continues after a self-growing perceptron network and its initial weight estimate are constructed for one of the newly partitioned regions; the other partitioned region resumes training on the original perceptron network. Data-region partitioning, weight estimation, and learning are repeated iteratively until all the training data are completely learned by the MPN. We have evaluated and compared the proposed MPN with several representative neural networks on the two-spirals problem and on real-world datasets. The MPN achieves better weight-learning performance, requiring far fewer data presentations (87.86%~99.01% fewer) during the network training phase, better generalization performance (4.0% better), and less processing time (2.0%~81.3% less) during the retrieving phase. On real-world data, MPNs show less overfitting than a single MLP. In addition, owing to its self-growing and fast local-learning characteristics, the MPN can easily adapt to online and/or incremental learning requirements in a rapidly changing environment.
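The abstract's partition-then-train loop can be sketched in a few lines. The sketch below is illustrative only and does not reproduce the paper's method: the stall test (small loss improvement), the split rule (median of the widest-variance feature), and the single-layer sigmoid modules are all simplified stand-ins for the paper's flat-region detection, data-region partitioning, and self-growing perceptron networks with weight estimation.

```python
# Hedged sketch of divide-and-conquer learning (DCL) for a modular network.
# Assumptions (not from the paper): stall = loss improvement below `tol`;
# split = median cut on the highest-variance feature; modules = one-layer
# sigmoid perceptrons trained by gradient descent on squared error.
import numpy as np

rng = np.random.default_rng(0)

def train_module(X, y, epochs=200, lr=0.5, tol=1e-4):
    """Train one sigmoid module; return (weights, stalled_flag)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    w = rng.normal(scale=0.1, size=Xb.shape[1])
    prev_loss = np.inf
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # sigmoid output
        loss = np.mean((p - y) ** 2)
        if prev_loss - loss < tol:                 # training has stalled
            return w, True
        prev_loss = loss
        grad = Xb.T @ ((p - y) * p * (1 - p)) / len(X)
        w -= lr * grad
    return w, False

def dcl_fit(X, y, max_modules=8):
    """Split stalled regions in two and fit one module per final region."""
    modules, queue = [], [(X, y)]
    while queue and len(modules) < max_modules:
        Xi, yi = queue.pop()
        w, stalled = train_module(Xi, yi)
        if stalled and len(Xi) > 2:
            # divide the hard region into two (hopefully) easier halves
            axis = int(np.argmax(Xi.var(axis=0)))  # widest feature
            mask = Xi[:, axis] < np.median(Xi[:, axis])
            if mask.any() and (~mask).any():
                queue += [(Xi[mask], yi[mask]), (Xi[~mask], yi[~mask])]
                continue
        modules.append(w)                          # region learned; keep module
    return modules

# Toy usage: two Gaussian blobs with binary labels
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
mods = dcl_fit(X, y)
print(f"trained {len(mods)} module(s)")
```

Each recursive split halves the data at a median, so the recursion depth is bounded and the loop terminates; `max_modules` additionally caps the network's growth, mirroring the self-growing behavior described in the abstract at a much smaller scale.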
Pages: 250 - 263 (14 pages)
Related Papers
50 records
  • [1] PRUNING DIVIDE-AND-CONQUER NETWORKS
    Romaniuk, S.G.
    [J]. NETWORK-COMPUTATION IN NEURAL SYSTEMS, 1993, 4 (04): 481 - 494
  • [2] DIVIDE-AND-CONQUER NEURAL NETWORKS
    Romaniuk, S.G.
    Hall, L.O.
    [J]. NEURAL NETWORKS, 1993, 6 (08): 1105 - 1116
  • [3] Modular Divide-and-Conquer Parallelization of Nested Loops
    Farzan, Azadeh
    Nicolet, Victor
    [J]. PROCEEDINGS OF THE 40TH ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '19), 2019: 610 - 624
  • [4] Divide-and-conquer Tournament on Social Networks
    Wang, Jiasheng
    Zhang, Yichao
    Guan, Jihong
    Zhou, Shuigeng
    [J]. SCIENTIFIC REPORTS, 2017, 7
  • [5] A divide-and-conquer learning approach to radial basis function networks
    Cheung, Yiu-Ming
    Huang, Rong-Bo
    [J]. NEURAL PROCESSING LETTERS, 2005, 21 (03): 189 - 206
  • [6] DIVIDE-AND-CONQUER
    Jeffries, T.
    [J]. BYTE, 1993, 18 (03): 187
  • [7] DIVIDE-AND-CONQUER
    Sawyer, P.
    [J]. CHEMICAL ENGINEER-LONDON, 1990, (484): 36 - 38