Boosting for superparent-one-dependence estimators

Cited by: 0
Authors
Wu, Jia [1]
Cai, Zhi-hua [1]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, 388 Lumo Rd, Wuhan, Hubei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Bayesian; superparent-one-dependence estimator; SPODE; aggregating one-dependence estimator; AODE; boosting; classification
DOI
Not available
Chinese Library Classification
T [Industrial Technology]
Discipline Code
08
Abstract
Naive Bayes (NB) is a probability-based classification model built on the conditional independence assumption. In many real-world applications, however, this assumption is violated. In response, superparent-one-dependence estimators (SPODEs) weaken the attribute independence assumption by letting each attribute of the database serve in turn as the superparent on which the other attributes depend. The aggregating one-dependence estimators (AODE) model, which estimates the corresponding parameters for every SPODE and averages over them, has proven to be one of the most efficient improvements on the NB classifier owing to its high accuracy. This paper investigates a novel approach that ensembles single SPODEs using the boosting strategy: Boosting for superparent-one-dependence estimators (BODE). BODE first assigns every instance a weight, and in each iteration selects the SPODE with the highest weighted accuracy as a weak classifier. It then combines all the selected weak classifiers to classify test instances. Experiments on UCI datasets demonstrate the performance of the algorithm.
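The loop the abstract describes (weight instances, pick the best SPODE each round, combine the weak classifiers) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the AdaBoost-style weight update, the Laplace smoothing, and the names `SPODE`, `bode_fit`, and `bode_predict` are all hypothetical choices made here for concreteness.

```python
# Sketch of a BODE-style ensemble: each round trains one SPODE per candidate
# superparent on the current instance weights, keeps the most accurate one,
# and reweights instances AdaBoost-style (an assumption about BODE's update).
import numpy as np
from collections import defaultdict

class SPODE:
    """One-dependence estimator with fixed superparent attribute `p`:
    P(y, x) ~ P(y, x_p) * prod_{i != p} P(x_i | y, x_p), Laplace-smoothed."""
    def __init__(self, p):
        self.p = p

    def fit(self, X, y, w):
        n, self.d = X.shape
        self.classes = np.unique(y)
        self.vals = [np.unique(X[:, i]) for i in range(self.d)]
        self.joint = defaultdict(float)   # (class, x_p) -> summed weight
        self.cond = defaultdict(float)    # (i, x_i, class, x_p) -> summed weight
        for row, c, wi in zip(X, y, w):
            xp = row[self.p]
            self.joint[(c, xp)] += wi
            for i in range(self.d):
                if i != self.p:
                    self.cond[(i, row[i], c, xp)] += wi
        self.total = w.sum()
        return self

    def _log_joint(self, x, c):
        xp = x[self.p]
        lp = np.log((self.joint[(c, xp)] + 1.0) /
                    (self.total + len(self.classes) * len(self.vals[self.p])))
        for i in range(self.d):
            if i != self.p:
                lp += np.log((self.cond[(i, x[i], c, xp)] + 1.0) /
                             (self.joint[(c, xp)] + len(self.vals[i])))
        return lp

    def predict(self, X):
        return np.array([max(self.classes, key=lambda c: self._log_joint(x, c))
                         for x in X])

def bode_fit(X, y, n_rounds=5):
    n, d = X.shape
    w = np.full(n, 1.0 / n)               # uniform initial instance weights
    ensemble = []
    for _ in range(n_rounds):
        models = [SPODE(p).fit(X, y, w) for p in range(d)]
        errs = [np.sum(w * (m.predict(X) != y)) for m in models]
        best = int(np.argmin(errs))       # SPODE with highest weighted accuracy
        err = errs[best]
        if err >= 0.5:                    # no better than chance: stop boosting
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = models[best].predict(X)
        w *= np.exp(np.where(pred == y, -alpha, alpha))
        w /= w.sum()
        ensemble.append((alpha, models[best]))
    return ensemble

def bode_predict(ensemble, X, classes):
    votes = np.zeros((len(X), len(classes)))
    for alpha, model in ensemble:
        pred = model.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += alpha * (pred == c)
    return classes[votes.argmax(axis=1)]

# Toy discrete dataset: the class label equals the first attribute.
X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 0]])
y = np.array([0, 0, 1, 1, 0, 1])
ensemble = bode_fit(X, y, n_rounds=3)
acc = (bode_predict(ensemble, X, np.unique(y)) == y).mean()
```

On this toy data the superparent `x_0` determines the class, so a single SPODE already separates it; the sketch is only meant to show how weighting, per-round selection, and weighted voting fit together.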
Pages: 277-286 (10 pages)
Related Papers (50 total)
  • [2] Yang, Y.; Korb, K.; Ting, K. M.; Webb, G. I. Ensemble selection for SuperParent-One-Dependence Estimators. AI 2005: Advances in Artificial Intelligence, 2005, 3809: 102-112.
  • [3] Yang, Ying; Webb, Geoffrey I.; Cerquides, Jesus; Korb, Kevin B.; Boughton, Janice; Ting, Kai Ming. To select or to weigh: a comparative study of linear combination schemes for superparent-one-dependence estimators. IEEE Transactions on Knowledge and Data Engineering, 2007, 19(12): 1652-1665.
  • [4] Zheng, Xiaolin; Lin, Zhen; Xu, Huan; Chen, Chaochao; Ye, Ting. Efficient learning ensemble SuperParent-one-dependence estimator by maximizing conditional log likelihood. Expert Systems with Applications, 2015, 42(21): 7732-7745.
  • [5] Duan, Zhiyi; Wang, Limin; Chen, Shenglei; Sun, Minghui. Instance-based weighting filter for superparent one-dependence estimators. Knowledge-Based Systems, 2020, 203.
  • [6] Jiang, Liangxiao. Random one-dependence estimators. Pattern Recognition Letters, 2011, 32(3): 532-539.
  • [7] Jiang, Liangxiao; Zhang, Harry. Weightily averaged one-dependence estimators. PRICAI 2006: Trends in Artificial Intelligence, Proceedings, 2006, 4099: 970-974.
  • [8] Jiang, Liangxiao; Zhang, Harry. Lazy averaged one-dependence estimators. Advances in Artificial Intelligence, Proceedings, 2006, 4013: 515-525.
  • [9] Jiang, Liangxiao; Zhang, Harry; Cai, Zhihua; Wang, Dianhong. Weighted average of one-dependence estimators. Journal of Experimental & Theoretical Artificial Intelligence, 2012, 24(2): 219-230.
  • [10] Avnimelech, R.; Intrator, N. Boosting regression estimators. Neural Computation, 1999, 11(2): 499-520.