Boosting conditional probability estimators

Cited by: 1
Authors
Gutfreund, Dan [1 ]
Kontorovich, Aryeh [2 ]
Levy, Ran [1 ]
Rosen-Zvi, Michal [1 ]
Affiliations
[1] IBM Research, Ruschlikon, Switzerland
[2] Ben-Gurion University of the Negev, Beer Sheva, Israel
Funding
Israel Science Foundation;
Keywords
Boosting; Conditional density; Regression;
DOI
10.1007/s10472-015-9465-7
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In the standard agnostic multiclass model, ⟨instance, label⟩ pairs are sampled independently from some underlying distribution. This distribution induces a conditional probability over the labels given an instance, and our goal in this paper is to learn this conditional distribution. Since even unconditional densities are quite challenging to learn, we give our learner access to ⟨instance, conditional distribution⟩ pairs. Assuming a base learner oracle in this model, we might seek a boosting algorithm for constructing a strong learner. Unfortunately, without further assumptions, this is provably impossible. However, we give a new boosting algorithm that succeeds in the following sense: given a base learner guaranteed to achieve some average accuracy (i.e., risk), we efficiently construct a learner that achieves the same level of accuracy with arbitrarily high probability. We give generalization guarantees of several different kinds, including distribution-free accuracy and risk bounds. None of our estimates depend on the number of boosting rounds and some of them admit dimension-free formulations.
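The guarantee stated above (matching the base learner's average risk, but with arbitrarily high probability) is in the spirit of classical confidence boosting. The sketch below is not the paper's algorithm; it is a minimal, generic illustration of that idea under stated assumptions: a hypothetical base_learner oracle that maps a sample of ⟨instance, conditional distribution⟩ pairs to an estimator, and a hypothetical risk function that scores an estimator on held-out data. Training the base learner on several disjoint chunks and keeping the candidate with the lowest validation risk drives the failure probability down as the number of chunks grows.

```python
import numpy as np

def boost_confidence(base_learner, risk, sample, k=10, val_frac=0.3, seed=0):
    """Generic confidence-boosting wrapper (illustrative sketch only).

    Shuffles the sample, holds out a validation part, trains the base
    learner on k disjoint chunks of the rest, and returns the candidate
    with the lowest empirical risk on the held-out part.
    """
    rng = np.random.default_rng(seed)
    sample = list(sample)
    rng.shuffle(sample)

    n_val = int(len(sample) * val_frac)
    val, train = sample[:n_val], sample[n_val:]

    # Train one candidate per disjoint chunk; with k independent tries,
    # the probability that every candidate misses the base learner's
    # risk guarantee decays geometrically in k.
    chunks = np.array_split(np.arange(len(train)), k)
    candidates = [base_learner([train[i] for i in idx]) for idx in chunks]

    # Validation selects a candidate whose risk is, with high probability,
    # close to the best of the k candidates.
    return min(candidates, key=lambda h: risk(h, val))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic data: scalar instances paired with 3-class conditional distributions.
    data = [(float(x), rng.dirichlet(np.ones(3))) for x in range(600)]

    def toy_base_learner(chunk):
        # Hypothetical oracle: ignores the instance, returns the average distribution.
        avg = np.mean([p for _, p in chunk], axis=0)
        return lambda x, avg=avg: avg

    def l1_risk(h, pts):
        # Average L1 distance between predicted and true conditional distributions.
        return float(np.mean([np.abs(h(x) - p).sum() for x, p in pts]))

    h_hat = boost_confidence(toy_base_learner, l1_risk, data, k=5)
    print("L1 risk of the selected estimator:", l1_risk(h_hat, data))
```

Disjoint chunks keep the k training runs statistically independent, which is what lets the per-candidate failure probabilities multiply; the validation split then identifies a near-best candidate at the cost of a small amount of extra data.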
Pages: 129-144
Number of pages: 16