A neural network boosting regression model based on XGBoost

Cited: 0
Authors
Dong, Jianwei [1 ]
Chen, Yumin [1 ]
Yao, Bingyu [1 ]
Zhang, Xiao [1 ]
Zeng, Nianfeng [2 ]
Affiliations
[1] Xiamen Univ Technol, Coll Comp & Informat Engn, Xiamen 361024, Fujian, Peoples R China
[2] E Success Informat Technol Co Ltd, Xiamen 361024, Fujian, Peoples R China
Keywords
Ensemble learning; Neural networks; Boosting; Regression model; Deep learning
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Boosting models are a class of ensemble learning techniques; representatives such as XGBoost and GBDT take decision trees as weak learners and achieve strong results on classification and regression problems. Neural networks perform excellently on image and speech recognition, but their weak interpretability limits the development of fusion models. Drawing on the principles and methods of traditional boosting models, we propose a Neural Network Boosting (NNBoost) regression, which takes shallow neural networks with simple structures as weak learners. NNBoost is a new ensemble learning method that obtains low regression errors on several data sets. The target loss function of NNBoost is approximated by a Taylor expansion, and by deriving its gradient form we give a gradient descent algorithm. Deep learning architectures are complex and suffer from problems such as vanishing gradients, weak interpretability, and parameters that are difficult to tune. We use an ensemble of simple neural networks to alleviate the vanishing-gradient problem, which is laborious to solve in deep learning, and to counter overfitting of the learning algorithm. Finally, experiments verify the correctness and effectiveness of NNBoost from multiple angles, demonstrate the benefit of fusing multiple shallow neural networks, and to some extent widen the development path joining the boosting idea and deep learning. (C) 2022 Elsevier B.V. All rights reserved.
Pages: 11