How Good Is Crude MDL for Solving the Bias-Variance Dilemma? An Empirical Investigation Based on Bayesian Networks

Cited by: 3
Authors
Cruz-Ramirez, Nicandro [1]
Acosta-Mesa, Hector Gabriel [1]
Mezura-Montes, Efren [1]
Guerra-Hernandez, Alejandro [1]
Hoyos-Rivera, Guillermo de Jesus [1]
Barrientos-Martinez, Rocio Erandi [1]
Gutierrez-Fragoso, Karina [2]
Nava-Fernandez, Luis Alonso [3]
Gonzalez-Gaspar, Patricia [1]
Novoa-del-Toro, Elva Maria [1]
Aguilera-Rueda, Vicente Josue [1]
Ameca-Alducin, Maria Yaneli [1]
Affiliations
[1] Univ Veracruzana, Fac Fis & Inteligencia Artificial, Xalapa 91000, Veracruz, Mexico
[2] Univ Veracruzana, Ctr Invest Biomed, Xalapa 91000, Veracruz, Mexico
[3] UNAM, Ctr Alta Tecnol Educ Distancia, Tlaxcala, Mexico
Source
PLOS ONE | 2014, Vol. 9, Issue 3
Keywords
MODEL SELECTION; PROBABILISTIC NETWORKS; CLASSIFIERS; INFORMATION; DISCRETIZATION; DISTRIBUTIONS; ALGORITHMS; KNOWLEDGE; VARIABLES
DOI
10.1371/journal.pone.0092866
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
The bias-variance dilemma is a well-known and important problem in Machine Learning. It relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these two dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence that recovers this interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: crude MDL naturally selects models that are balanced in terms of bias and variance, and these are not necessarily the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
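Since the paper's central metric is crude (two-part) MDL, a small illustration may help fix ideas. Below is a minimal sketch, assuming the standard two-part formulation MDL(B, D) = -log2 P(D | B) + (k/2) log2 N, where B is the network structure, k its number of free parameters and N the sample size; the function names, variable names and toy data are ours for illustration and are not taken from the paper.

```python
import math
from collections import Counter

def crude_mdl(data, parents, arities):
    """Crude (two-part) MDL score of a discrete Bayesian network structure.
    data: list of dicts mapping variable -> state;
    parents: dict mapping variable -> list of its parent variables;
    arities: dict mapping variable -> number of states.
    Lower scores are better."""
    n = len(data)
    log_lik = 0.0   # maximized log-likelihood, in bits
    k = 0           # number of free parameters in the network
    for var, pa in parents.items():
        # Frequencies of (parent configuration, child state) pairs and of
        # each parent configuration alone.
        joint = Counter((tuple(r[p] for p in pa), r[var]) for r in data)
        pa_cfg = Counter(tuple(r[p] for p in pa) for r in data)
        for (cfg, _), n_ijk in joint.items():
            log_lik += n_ijk * math.log2(n_ijk / pa_cfg[cfg])
        q = 1
        for p in pa:
            q *= arities[p]              # number of parent configurations
        k += (arities[var] - 1) * q      # free parameters for this variable
    # Two-part code: data given the model, plus the model's parameters.
    return -log_lik + 0.5 * k * math.log2(n)

# Toy usage: compare the empty graph with C -> X on noisy binary data.
rows = [{"C": c, "X": c if i % 4 else 1 - c}
        for i in range(20) for c in (0, 1)]
print(crude_mdl(rows, {"C": [], "X": []}, {"C": 2, "X": 2}))      # empty graph
print(crude_mdl(rows, {"C": [], "X": ["C"]}, {"C": 2, "X": 2}))   # C -> X
```

On this toy data the structure C -> X obtains a lower (better) score than the empty graph: its gain in log-likelihood outweighs the extra parameters it must encode, which is exactly the fit-versus-complexity balance the score is meant to arbitrate.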
Pages: 26