regularization; dropout; weight-constrained networks; penalty functions; pooling; data augmentation; early stopping; adversarial learning
DOI: 10.3390/electronics11040521
Chinese Library Classification (CLC) number: TP [Automation Technology, Computer Technology]
Discipline classification code: 0812
Abstract:
Over the last decade, learning theory has made significant progress in the development of sophisticated algorithms and their theoretical foundations. The theory builds on concepts that exploit ideas and methodologies from mathematical areas such as optimization theory. Regularization is probably the key to addressing the challenging problem of overfitting, which usually occurs in high-dimensional learning. Its primary goal is to make a machine learning algorithm "learn" rather than "memorize", penalizing the algorithm so as to reduce its generalization error and avoid the risk of overfitting. As a result, the variance of the model is significantly reduced, without a substantial increase in its bias and without losing any important properties of the data.
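As a concrete illustration of the penalty idea the abstract describes, the sketch below fits an L2-penalized (ridge) linear regression in plain NumPy and compares the learned weight norm with the unpenalized least-squares fit. This is a minimal sketch of one classic regularization technique, not code from the surveyed paper; the data, the penalty strength lam, and the helper name fit_ridge are all illustrative assumptions.

# Minimal sketch of L2 (ridge) regularization; illustrative only,
# not taken from the paper. All names, data, and lam are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# A small, noisy problem with many features, where plain least squares
# tends to "memorize" the noise rather than "learn" the signal.
n_samples, n_features = 20, 15
X = rng.normal(size=(n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:3] = [2.0, -1.0, 0.5]          # only three features actually matter
y = X @ true_w + rng.normal(scale=0.5, size=n_samples)

def fit_ridge(X, y, lam):
    """Closed-form minimizer of ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols   = fit_ridge(X, y, lam=0.0)     # no penalty: fits the noise freely
w_ridge = fit_ridge(X, y, lam=5.0)     # penalty shrinks the weights

print("||w|| without penalty:", np.linalg.norm(w_ols))
print("||w|| with L2 penalty:", np.linalg.norm(w_ridge))

With the penalty active, the weight norm shrinks, which is precisely the trade the abstract points to: a small increase in bias in exchange for a substantial reduction in the model's variance.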
Travieso, Carlos M.
Affiliation: Univ Las Palmas Gran Canaria, Inst Technol Dev & Innovat Commun, Las Palmas Gran Canaria, Spain

Fodor, Janos
Papers: 0; Citations: 0; h-index: 0
Affiliation: Obuda Univ, Budapest, Hungary

Alonso, Jesus B.
Papers: 0; Citations: 0; h-index: 0
Affiliation: Univ Las Palmas Gran Canaria, Inst Technol Dev & Innovat Commun, Las Palmas Gran Canaria, Spain