On the time complexity of regularized least square

Cited by: 1
Authors
Gori, Marco [1 ]
Institution
[1] Univ Siena, Dipartimento Ingn Informaz, I-53100 Siena, Italy
Source
NEURAL NETS WIRN11 | 2011 / Vol. 234
Keywords
Computational complexity; condition number; kernel machines; regularized least square;
DOI
10.3233/978-1-60750-972-1-85
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the general framework of kernel machines, the hinge loss has become more popular than the square loss, partly for computational reasons. Since learning with the square loss reduces to solving a linear system of equations, in the case of very large tasks where the number of examples is proportional to the input dimension, the solution of square-loss regularization costs O(l^3), where l is the number of examples, and it has been claimed that such learning is unaffordable for large-scale problems. However, this is only an upper bound, and in-depth experimental analyses indicate that for linear kernels (or in other cases where the kernel matrix is sparse or admits a decomposition known a priori), regularized least squares (RLS) is substantially faster than the support vector machine (SVM) at both training and test time. In this paper, we give theoretical results supporting those experimental findings by proving that there are conditions under which learning with square-loss regularization is Θ(l), even for large input dimensions d with d ≃ l.
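For orientation, the following is a minimal NumPy sketch (not taken from the paper) contrasting the two standard routes to square-loss regularization that the abstract refers to: the dual solve of the l x l kernel system, which yields the O(l^3) bound, and the primal solve for a linear kernel, whose cost is linear in l once the input dimension d is fixed. The regularization parameter lam and the synthetic data are illustrative assumptions.

```python
import numpy as np

def rls_primal(X, y, lam=1e-3):
    """Square-loss regularization with a linear kernel, solved in the primal.

    Solves (X^T X + lam * l * I) w = X^T y, a (d x d) system, so the cost is
    O(l * d^2 + d^3) rather than the O(l^3) of the naive dual solve.
    For fixed d this is linear in the number of examples l.
    """
    l, d = X.shape
    A = X.T @ X + lam * l * np.eye(d)   # d x d covariance-type matrix
    b = X.T @ y
    return np.linalg.solve(A, b)        # weight vector w; predictions are X @ w

def rls_dual(K, y, lam=1e-3):
    """Generic kernel RLS: solve (K + lam * l * I) c = y, an (l x l) system.

    This is the O(l^3) route; it is needed when only the kernel matrix K
    is available rather than an explicit feature map.
    """
    l = K.shape[0]
    return np.linalg.solve(K + lam * l * np.eye(l), y)

# Tiny usage example with synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
y = X @ rng.standard_normal(20) + 0.01 * rng.standard_normal(1000)
w = rls_primal(X, y)
c = rls_dual(X @ X.T, y)
print(np.allclose(X @ w, (X @ X.T) @ c))  # primal and dual give the same predictions
```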
Pages: 85-96
Page count: 12