Neural network ensemble training by sequential interaction

Cited by: 0
Authors
Akhand, M. A. H. [1 ]
Murase, Kazuyuki [1 ]
Affiliations
[1] Univ Fukui, Grad Sch Engn, Fukui, Japan
Keywords
bagging; boosting; negative correlation learning; diversity and generalization
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A neural network ensemble (NNE) has been shown to outperform a single neural network (NN) in terms of generalization ability. The performance of an NNE therefore depends on sufficient diversity among its component NNs. Popular NNE methods, such as bagging and boosting, use data sampling techniques to achieve diversity: each NN is trained independently on a particular training set that is created probabilistically. Because of this independent training strategy, there is no interaction among the component NNs. To achieve training-time interaction, negative correlation learning (NCL) has been proposed for simultaneous training; however, NCL demands direct communication among the component NNs, which is not possible in bagging and boosting. In this study, we first modify NCL from a simultaneous to a sequential style and then introduce it into bagging and boosting to provide interaction. Empirical studies show that sequential training-time interaction increases diversity among the component NNs and that the resulting methods outperform the conventional ones in generalization ability.
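The abstract does not spell out the sequential NCL update, so the following is a minimal sketch of the general idea, assuming a regression setting: each network in a bagging ensemble is trained on its own bootstrap sample, but its loss carries an NCL-style penalty of the form 0.5*(f - y)^2 - lam*(f - prev_mean)^2, computed against the fixed mean output of the networks trained before it. The architecture, the names train_one_net and sequential_ncl_bagging, and the penalty strength lam are illustrative assumptions, not the authors' exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def train_one_net(X, y, prev_mean, lam=0.25, hidden=10, epochs=500, lr=0.05):
    # Single-hidden-layer regressor trained by full-batch gradient descent.
    # Per-sample loss: 0.5*(f - y)^2 - lam*(f - prev_mean)^2, i.e. fit the
    # target while being pushed away from the mean output of the networks
    # trained so far (a sequential analogue of the NCL penalty).
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                   # hidden activations
        f = H @ W2 + b2                            # network output
        g = (f - y) - 2.0 * lam * (f - prev_mean)  # dLoss/df
        gW2 = H.T @ g / n;  gb2 = g.mean()
        gH = np.outer(g, W2) * (1.0 - H ** 2)      # back-prop through tanh
        gW1 = X.T @ gH / n; gb1 = gH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

def sequential_ncl_bagging(X, y, n_nets=5, lam=0.25):
    # Bagging with sequential interaction: every network is trained on its
    # own bootstrap sample, while the NCL-style penalty is computed against
    # the (fixed) outputs of the previously trained networks.
    nets = []
    for _ in range(n_nets):
        idx = rng.integers(0, len(X), len(X))      # bootstrap sample
        if nets:
            prev_mean = np.mean([net(X[idx]) for net in nets], axis=0)
        else:
            prev_mean = np.zeros(len(idx))         # first net: no interaction yet
        nets.append(train_one_net(X[idx], y[idx], prev_mean, lam))
    return lambda Xq: np.mean([net(Xq) for net in nets], axis=0)

# Toy usage: learn sin(x) from noisy samples.
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 200)
ensemble = sequential_ncl_bagging(X, y)
print(ensemble(np.array([[0.0], [1.5]])))

Because only already-trained networks enter the penalty, each network can still be trained one at a time on its own sampled data, which is what makes this style of interaction compatible with bagging and boosting, in contrast to simultaneous NCL.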
Pages: 98-+
Number of pages: 3