Special Issue: Regularization Techniques for Machine Learning and Their Applications

Cited by: 6
Authors
Kotsilieris, Theodore [1 ]
Anagnostopoulos, Ioannis [2 ]
Livieris, Ioannis E. [3 ]
Affiliations
[1] Univ Peloponnese, Dept Business Adm, GR-24100 Kalamata, Greece
[2] Univ Thessaly, Dept Comp Sci & Biomed Informat, GR-35100 Volos, Greece
[3] Core Innovat & Technol OE, GR-11745 Athens, Greece
Keywords
regularization; dropout; weight-constrained networks; penalty functions; pooling; data augmentation; early stopping; adversarial learning
DOI
10.3390/electronics11040521
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Over the last decade, learning theory has made significant progress in the development of sophisticated algorithms and their theoretical foundations. The theory builds on concepts that exploit ideas and methodologies from mathematical areas such as optimization theory. Regularization is probably the key to addressing the challenging problem of overfitting, which usually occurs in high-dimensional learning. Its primary goal is to make the machine learning algorithm "learn" rather than "memorize", by penalizing the algorithm in order to reduce its generalization error and thus avoid the risk of overfitting. As a result, the variance of the model is significantly reduced, without a substantial increase in its bias and without losing any important properties of the data.
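A minimal sketch of the penalty idea the abstract describes, in generic notation not taken from the editorial itself (the loss \ell, predictor f_w, and penalty weight \lambda are illustrative symbols): the regularized learner minimizes

\min_{w}\; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_w(x_i),\, y_i\big) \;+\; \lambda\, \Omega(w), \qquad \text{e.g. } \Omega(w) = \lVert w \rVert_2^2 \ \text{(ridge / weight decay)}.

Increasing \lambda strengthens the penalty and shrinks the model's effective capacity, so variance falls while bias rises only mildly; this is the variance reduction without substantial bias increase referred to above.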
Pages: 3