Robust Bayesian linear classifier ensembles

Citations: 0
Authors
Cerquides, J [1]
de Màntaras, RL [2]
Affiliations
[1] Univ Barcelona, Dept Matemat Aplicada & Anal, Barcelona, Spain
[2] CSIC, Spanish Council Sci Res, Artificial Intelligence Res Inst, IIIA, Madrid, Spain
Source
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Ensemble classifiers combine the classification results of several classifiers. Simple ensemble methods, such as uniform averaging over a set of models, usually provide an improvement over selecting the single best model. Probabilistic classifiers usually restrict the set of models that can be learnt in order to reduce computational cost. In these restricted spaces, where incorrect modeling assumptions may be made, uniform averaging sometimes performs even better than Bayesian model averaging. Linear mixtures over sets of models provide a space that includes uniform averaging as a particular case. We develop two algorithms for learning maximum a posteriori weights for linear mixtures, based on expectation maximization and on constrained optimization. We provide a nontrivial example of the utility of these two algorithms by applying them to one-dependence estimators. We develop the conjugate distribution for one-dependence estimators and empirically show that uniform averaging is clearly superior to Bayesian model averaging for this family of models. We then empirically show that the maximum a posteriori linear mixture weights improve accuracy significantly over uniform aggregation.
Pages: 72-83
Number of pages: 12
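
The abstract describes two algorithms for learning maximum a posteriori weights for linear mixtures of classifiers, one based on expectation maximization. Below is a minimal sketch of such an EM update for MAP mixture weights under a symmetric Dirichlet prior. The function names, the probs input layout, and the alpha parameter are illustrative assumptions, not the paper's actual implementation, which targets one-dependence estimators and also includes a constrained-optimization variant.

import numpy as np

def map_mixture_weights_em(probs, alpha=2.0, n_iter=100, tol=1e-8):
    # probs[i, k]: probability that base classifier k assigns to the true
    # label of training example i (assumed precomputed by the caller).
    # alpha: symmetric Dirichlet prior on the mixture weights (alpha >= 1).
    n, k = probs.shape
    w = np.full(k, 1.0 / k)  # start from uniform averaging
    prev_obj = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of classifier k for example i.
        weighted = probs * w
        denom = weighted.sum(axis=1, keepdims=True)
        resp = weighted / denom
        # M-step: closed-form MAP update of the mixture weights.
        w = (resp.sum(axis=0) + alpha - 1.0) / (n + k * (alpha - 1.0))
        # Penalized log-likelihood, used only as a convergence check.
        obj = np.log(denom).sum() + (alpha - 1.0) * np.log(w).sum()
        if obj - prev_obj < tol:
            break
        prev_obj = obj
    return w  # non-negative weights summing to one

def mixture_predict(class_probs, w):
    # class_probs[i, k, c]: probability of class c from classifier k on
    # example i; returns the weighted linear mixture over classifiers.
    return np.einsum('nkc,k->nc', class_probs, w)

The initial uniform weights correspond to uniform averaging; the EM iterations then move the weights toward a maximum a posteriori mixture, which is the kind of improvement over uniform aggregation the abstract reports empirically.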