Efficient learning ensemble SuperParent-one-dependence estimator by maximizing conditional log likelihood

Cited: 4
Authors
Zheng, Xiaolin [1 ]
Lin, Zhen [1 ,2 ]
Xu, Huan [1 ]
Chen, Chaochao [1 ,2 ]
Ye, Ting [3 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Univ Illinois, Dept Comp Sci, Urbana, IL 61801 USA
[3] Zhejiang Univ, Dept Math, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Classification; Gradient methods; Machine learning; Modeling structured; Conditional likelihood; NAIVE BAYES; CLASSIFIER;
DOI
10.1016/j.eswa.2015.05.051
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The ensemble of SuperParent one-dependence estimators (SPODEs) is one of the most effective improvements on naive Bayes: it achieves high classification accuracy while reducing variance. However, most existing approaches focus only on improving the performance of individual SPODEs through selection and weighting procedures, overlooking the ensemble model as a whole. Based on the assumption that optimizing the performance of the entire ensemble classifier yields a better weight distribution than applying a greedy strategy inside each SPODE, we propose an ensemble SPODE algorithm that maximizes the conditional log likelihood (EODE-CLL). First, we choose the maximum conditional probability as the global optimization objective, which avoids the over-fitting problem associated with the least-squares error. Second, the algorithm assigns hierarchical weights to the SPODEs and to the attributes inside each SPODE; this second weight layer allows each local SPODE model to be fully optimized. Finally, stochastic gradient descent is used to search for the best parameters. The method scales well, and batch and distributed versions follow naturally. Compared with existing ensemble SPODEs, our proposed model achieves more accurate and robust classification results with better time complexity. We conduct experiments on a public benchmark of 36 datasets; the results show that EODE-CLL significantly outperforms state-of-the-art ensemble SPODE methods in terms of accuracy, F-measure, bias, and variance. (C) 2015 Elsevier Ltd. All rights reserved.
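The abstract's core optimization idea — a weighted mixture of SPODE posteriors whose weights are learned by stochastic gradient ascent on the conditional log likelihood — can be sketched in Python. This is a minimal, illustrative sketch only: it learns SPODE-level weights via a softmax parameterization, omits the paper's second (attribute-level) weight layer, and all function names here are assumptions, not the authors' implementation:

```python
import numpy as np

def fit_spode(X, y, sp, n_vals, n_classes, alpha=1.0):
    """Fit one SPODE with superparent attribute `sp` from Laplace-smoothed counts."""
    n, d = X.shape
    joint = np.full((n_classes, n_vals), alpha)            # estimates P(c, x_sp)
    for c, v in zip(y, X[:, sp]):
        joint[c, v] += 1
    joint /= joint.sum()
    cond = np.full((d, n_classes, n_vals, n_vals), alpha)  # estimates P(x_i | c, x_sp)
    for row, c in zip(X, y):
        for i in range(d):
            cond[i, c, row[sp], row[i]] += 1
    cond /= cond.sum(axis=-1, keepdims=True)
    return joint, cond

def spode_posterior(x, sp, joint, cond):
    """Per-class posterior P_sp(c | x) of a single SPODE, normalised over classes."""
    logp = np.log(joint[:, x[sp]])
    for i in range(len(x)):
        if i != sp:
            logp = logp + np.log(cond[i, :, x[sp], x[i]])
    p = np.exp(logp - logp.max())
    return p / p.sum()

def train_ensemble_weights(X, y, spodes, epochs=50, lr=0.5, seed=0):
    """Stochastic gradient ascent on the conditional log likelihood of the
    weighted SPODE mixture; weights are a softmax over one logit per SPODE."""
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    # Precompute each SPODE's class posterior for every training instance.
    P = np.array([[spode_posterior(x, sp, *spodes[sp]) for sp in range(d)] for x in X])
    for _ in range(epochs):
        for idx in rng.permutation(len(X)):
            w = np.exp(theta - theta.max()); w /= w.sum()
            mix = w @ P[idx]                   # ensemble P(c | x) for this instance
            # Gradient of log mix[y] w.r.t. theta, through the softmax chain rule.
            grad = w * (P[idx][:, y[idx]] - mix[y[idx]]) / mix[y[idx]]
            theta += lr * grad
    w = np.exp(theta - theta.max())
    return w / w.sum()
```

Optimizing the mixture's conditional log likelihood directly, rather than scoring each SPODE in isolation, is what lets the weight distribution reflect the whole ensemble's performance, as the abstract argues.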
Pages: 7732-7745
Page count: 14
Related Papers
15 records
  • [1] Ensemble selection for SuperParent-One-Dependence Estimators
    Yang, Y
    Korb, K
    Ting, KM
    Webb, GI
    [J]. AI 2005: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2005, 3809 : 102 - 112
  • [2] Boosting for superparent-one-dependence estimators
    Wu, Jia
    Cai, Zhi-hua
    [J]. INTERNATIONAL JOURNAL OF COMPUTING SCIENCE AND MATHEMATICS, 2013, 4 (03) : 277 - 286
  • [4] To select or to weigh: A comparative study of linear combination schemes for superparent-one-dependence estimators
    Yang, Ying
    Webb, Geoffrey I.
    Cerquides, Jesus
    Korb, Kevin B.
    Boughton, Janice
    Ting, Kai Ming
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2007, 19 (12) : 1652 - 1665
  • [5] LEARNING DECISION TREES WITH LOG CONDITIONAL LIKELIHOOD
    Liang, Han
    Yan, Yuhong
    Zhang, Harry
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2010, 24 (01) : 117 - 151
  • [6] Active learning algorithm using the maximum weighted log-likelihood estimator
    Kanamori, T
    Shimodaira, H
    [J]. JOURNAL OF STATISTICAL PLANNING AND INFERENCE, 2003, 116 (01) : 149 - 162
  • [7] A NOTE ON CONDITIONS FOR THE ASYMPTOTIC NORMALITY OF THE CONDITIONAL MAXIMUM-LIKELIHOOD ESTIMATOR IN LOG ODDS RATIO REGRESSION
    FORBES, AB
    SANTNER, TJ
    [J]. STATISTICS & PROBABILITY LETTERS, 1993, 18 (02) : 137 - 146
  • [8] Prototype Learning with Margin-Based Conditional Log-likelihood Loss
    Jin, Xiaobo
    Liu, Cheng-Lin
    Hou, Xinwen
    [J]. 19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOLS 1-6, 2008, : 22 - 25
  • [9] Discriminative Learning of Bayesian Networks via Factorized Conditional Log-Likelihood
    Carvalho, Alexandra M.
    Roos, Teemu
    Oliveira, Arlindo L.
    Myllymaki, Petri
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 2181 - 2210
  • [10] MICCLLR: Multiple-Instance Learning Using Class Conditional Log Likelihood Ratio
    EL-Manzalawy, Yasser
    Honavar, Vasant
    [J]. DISCOVERY SCIENCE, PROCEEDINGS, 2009, 5808 : 80 - 91