Model Selection by Linear Programming

Cited: 0
Authors
Wang, Joseph [1 ]
Bolukbasi, Tolga [1 ]
Trapeznikov, Kirill [1 ]
Saligrama, Venkatesh [1 ]
Affiliations
[1] Boston Univ, Boston, MA 02215 USA
Keywords
test-time budget; adaptive model selection; cost-sensitive learning;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Budget constraints arise in many computer vision problems. Computational costs limit many automated recognition systems, while crowdsourced systems are hindered by monetary costs. We leverage the wide variability in image complexity and learn adaptive model-selection policies. Our learned policy maximizes performance under an average budget constraint by selecting "cheap" models for low-complexity instances and reserving descriptive models for complex ones. During training, we assume access to a set of models that utilize features of different costs and types. We consider a binary tree architecture in which each leaf corresponds to a different model. Internal decision nodes adaptively guide the model-selection process along paths of the tree. The learning problem can be posed as empirical risk minimization over training data with a non-convex objective function. Using hinge-loss surrogates, we show that adaptive model selection reduces to a linear program, thus realizing substantial computational efficiencies and guaranteed convergence properties.
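The reduction described in the abstract can be illustrated on a toy, single-decision-node version of the problem. The sketch below is not the paper's exact formulation: it assumes hypothetical per-instance routing labels `y` (+1 if the cheap model suffices, -1 if the expensive model is needed) and a trade-off constant `C`, and it linearizes the hinge loss with slack variables and an L1 penalty with auxiliary variables so the whole objective becomes a linear program solvable by an off-the-shelf LP solver.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d = 40, 2
X = rng.normal(size=(n, d))
# Hypothetical routing labels: +1 -> cheap model suffices, -1 -> needs the
# expensive model. Here they are synthesized from the first feature plus noise.
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))
y[y == 0] = 1

C = 1.0  # assumed trade-off between L1 regularization and hinge slack
# Variable vector z = [w (d), u (d), b (1), xi (n)], where u majorizes |w|
# and xi are hinge slacks, so the objective sum(u) + C*sum(xi) is linear.
nv = 2 * d + 1 + n
c = np.concatenate([np.zeros(d), np.ones(d), [0.0], C * np.ones(n)])

A, b_ub = [], []
# Hinge constraints: y_i * (w . x_i + b) >= 1 - xi_i, written as <= form.
for i in range(n):
    row = np.zeros(nv)
    row[:d] = -y[i] * X[i]
    row[2 * d] = -y[i]
    row[2 * d + 1 + i] = -1.0
    A.append(row)
    b_ub.append(-1.0)
# L1 linearization: -u_j <= w_j <= u_j for every coordinate j.
for j in range(d):
    r1 = np.zeros(nv); r1[j] = 1.0; r1[d + j] = -1.0
    A.append(r1); b_ub.append(0.0)
    r2 = np.zeros(nv); r2[j] = -1.0; r2[d + j] = -1.0
    A.append(r2); b_ub.append(0.0)

bounds = [(None, None)] * d + [(0, None)] * d + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=np.array(A), b_ub=b_ub, bounds=bounds, method="highs")
w, b = res.x[:d], res.x[2 * d]
```

Solving the LP yields a linear routing rule `sign(w . x + b)`; in the paper's tree architecture, one such rule sits at every internal node, and all nodes are learned jointly within a single linear program under the budget constraint.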
Pages: 647-662
Page count: 16