Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization

Authors
Sayan Mukherjee
Partha Niyogi
Tomaso Poggio
Ryan Rifkin
Affiliations
[1] Center for Biological and Computational Learning, Artificial Intelligence Laboratory, and McGovern Institute, Massachusetts Institute of Technology
[2] MIT/Whitehead Institute Center for Genome Research
[3] Department of Computer Science and Statistics, University of Chicago
[4] Honda Research Institute
Source
Advances in Computational Mathematics, 2006, Vol. 25
Keywords
stability; inverse problems; generalization; consistency; empirical risk minimization; uniform Glivenko–Cantelli
Abstract
Solutions of learning problems by Empirical Risk Minimization (ERM) – and almost-ERM when the minimizer does not exist – need to be consistent, so that they may be predictive. They also need to be well-posed in the sense of being stable, so that they might be used robustly. We propose a statistical form of stability, defined as leave-one-out (LOO) stability. We prove that for bounded loss classes LOO stability is (a) sufficient for generalization, that is, convergence in probability of the empirical error to the expected error, for any algorithm satisfying it, and (b) necessary and sufficient for consistency of ERM. Thus LOO stability is a weak form of stability that represents a sufficient condition for generalization for symmetric learning algorithms while subsuming the classical conditions for consistency of ERM. In particular, we conclude that a certain form of well-posedness and consistency are equivalent for ERM.
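To make the abstract's terminology concrete, the following is a minimal LaTeX sketch of the two notions it relates. The notation (training set S, hypothesis f_S returned by the algorithm, bounded loss V, S^i for S with the i-th point removed) is generic and assumed here for illustration; the paper's exact LOO-stability definition combines several leave-one-out conditions and is not reproduced verbatim.

% Assumed setup: S = {z_1, ..., z_n} drawn i.i.d. from a measure mu,
% f_S is the hypothesis returned by a symmetric algorithm trained on S,
% V(f, z) is a bounded loss, and S^i denotes S with the point z_i removed.
\begin{align*}
  I_S[f_S] &= \frac{1}{n}\sum_{i=1}^{n} V(f_S, z_i)
    && \text{(empirical error)} \\
  I[f_S]   &= \mathbb{E}_{z \sim \mu}\, V(f_S, z)
    && \text{(expected error)} \\
  \text{generalization:}\quad
    & \bigl| I_S[f_S] - I[f_S] \bigr| \xrightarrow{\;P\;} 0
    && \text{as } n \to \infty \\
  \text{leave-one-out stability (illustrative form):}\quad
    & \bigl| V(f_{S^i}, z_i) - V(f_S, z_i) \bigr| \xrightarrow{\;P\;} 0
    && \text{for all } i .
\end{align*}

In these terms, claim (a) of the abstract says that the stability condition (suitably formalized) implies the generalization condition for any symmetric algorithm, while claim (b) says that for ERM over a bounded loss class it is, in addition, equivalent to consistency.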
Pages: 161–193
Number of pages: 33