Efficient parameter learning of Bayesian network classifiers

Cited: 21
Authors
Zaidi, Nayyar A. [1 ]
Webb, Geoffrey I. [1 ]
Carman, Mark J. [1 ]
Petitjean, Francois [1 ]
Buntine, Wray [1 ]
Hynes, Mike [2 ]
De Sterck, Hans [3 ]
Affiliations
[1] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
[2] Univ Waterloo, Dept Appl Math, Waterloo, ON, Canada
[3] Monash Univ, Sch Math Sci, Clayton, Vic 3800, Australia
Funding
Australian Research Council
Keywords
PROBABILITY-DISTRIBUTIONS; ALGORITHM
DOI
10.1007/s10994-016-5619-z
Chinese Library Classification
TP18 [theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Recent advances have demonstrated substantial benefits from learning with both generative and discriminative parameters. On the one hand, generative approaches estimate the parameters of the joint distribution, which for most network types is very computationally efficient (Markov networks being a notable exception). On the other hand, discriminative approaches estimate the parameters of the posterior distribution and are more effective for classification, since they fit that distribution directly. However, discriminative approaches are less computationally efficient because the normalization factor in the conditional log-likelihood precludes closed-form parameter estimates. This paper introduces a new discriminative parameter learning method for Bayesian network classifiers that elegantly combines parameters learned using generative and discriminative methods. The proposed method is discriminative in nature, but uses estimates of generative probabilities to speed up the optimization process. A second contribution is a simple framework for characterizing the parameter learning task for Bayesian network classifiers. We conduct an extensive set of experiments on 72 standard datasets and demonstrate that our proposed discriminative parameterization provides an efficient alternative to other state-of-the-art parameterizations.
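The general idea the abstract describes can be illustrated with a minimal sketch: first compute closed-form generative (naive Bayes) estimates, then discriminatively optimize per-feature weights on those fixed log-probabilities by gradient ascent on the conditional log-likelihood, starting from the generative solution (all weights equal to one). This is a simplified weighted-naive-Bayes illustration of the generative/discriminative combination, not the paper's exact algorithm; the synthetic dataset, weighting scheme, and learning rate are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny dataset: 2 binary features, 2 classes.
# Feature 0 predicts the class up to 10% label noise.
X = rng.integers(0, 2, size=(200, 2))
y = (X[:, 0] ^ (rng.random(200) < 0.1)).astype(int)

def generative_log_params(X, y, alpha=1.0):
    """Laplace-smoothed naive Bayes estimates: the closed-form generative step."""
    n_classes, n_feats = 2, X.shape[1]
    log_prior = np.log(np.bincount(y, minlength=n_classes) + alpha) \
                - np.log(len(y) + n_classes * alpha)
    log_cond = np.zeros((n_classes, n_feats, 2))   # log P(x_j = v | c)
    for c in range(n_classes):
        Xc = X[y == c]
        for j in range(n_feats):
            counts = np.bincount(Xc[:, j], minlength=2) + alpha
            log_cond[c, j] = np.log(counts / counts.sum())
    return log_prior, log_cond

def class_log_scores(X, log_prior, log_cond, w):
    """Weighted log joint: log P(c) + sum_j w_j * log P(x_j | c)."""
    scores = np.tile(log_prior, (len(X), 1))
    for j in range(X.shape[1]):
        scores += w[j] * log_cond[:, j, X[:, j]].T
    return scores

log_prior, log_cond = generative_log_params(X, y)
w = np.ones(X.shape[1])   # initialize at the generative solution (all weights = 1)

# Discriminative step: gradient ascent on the conditional log-likelihood
# with respect to the feature weights, generative estimates held fixed.
for _ in range(200):
    scores = class_log_scores(X, log_prior, log_cond, w)
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                # P(c | x) under current w
    grad = np.zeros_like(w)
    for j in range(X.shape[1]):
        lc = log_cond[:, j, X[:, j]].T               # shape (n, n_classes)
        grad[j] = (lc[np.arange(len(y)), y] - (p * lc).sum(axis=1)).mean()
    w += 0.5 * grad

pred = class_log_scores(X, log_prior, log_cond, w).argmax(axis=1)
```

Starting the optimization from the generative estimates is what makes the discriminative step cheap: the search begins near a good solution rather than at an arbitrary initialization.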
Pages: 1289-1329 (41 pages)