An approximation theory approach to learning with l1 regularization
Cited by: 13
Authors:
Wang, Hong-Yan [1]
Xiao, Quan-Wu [2]
Zhou, Ding-Xuan [3]
Affiliations:
[1] Zhejiang Gongshang Univ, Sch Math & Stat, Hangzhou 310018, Zhejiang, Peoples R China
[2] Microsoft Search Technol Ctr Asia, Beijing 100080, Peoples R China
[3] City Univ Hong Kong, Dept Math, Kowloon, Hong Kong, Peoples R China
Keywords:
Learning theory;
Data dependent hypothesis spaces;
Kernel-based regularization scheme;
l(1)-regularizer;
Multivariate approximation;
MODEL SELECTION;
SPACES;
INTERPOLATION;
REGRESSION;
OPERATORS;
DOI:
10.1016/j.jat.2012.12.004
CLC classification:
O1 [Mathematics];
Subject classification:
0701;
070101;
Abstract:
Regularization schemes with an l(1)-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the l(1)-regularizer. We show that the convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem when the kernel is smooth enough. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach using a local polynomial reproduction formula and the norming set condition. (C) 2012 Elsevier Inc. All rights reserved.
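The scheme the abstract describes — kernel-based regression with an l(1)-regularizer over a data-dependent hypothesis space spanned by kernel sections — can be illustrated with a minimal sketch. This is not the paper's analysis, only a generic coordinate-descent (LASSO-style) solver for min_c (1/n)||y - Kc||^2 + lam*||c||_1 with a Gaussian kernel; the data, bandwidth `sigma`, and penalty `lam` are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)); a smooth kernel,
    # as required by the dimension-independent rates in the abstract.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def l1_kernel_regression(K, y, lam=0.05, n_iter=200):
    """Coordinate descent for min_c (1/n)||y - K c||^2 + lam * ||c||_1.

    Soft-thresholding sets many coefficients exactly to zero, giving the
    sparse representation in the span of kernel sections x -> K(x, x_i).
    """
    n = K.shape[0]
    c = np.zeros(n)
    col_sq = (K ** 2).sum(axis=0) / n  # per-coordinate curvature ||K_j||^2 / n
    for _ in range(n_iter):
        for j in range(n):
            # residual with coordinate j removed
            r = y - K @ c + K[:, j] * c[j]
            rho = K[:, j] @ r / n
            # soft-thresholding update; exact minimizer of the 1-D subproblem
            c[j] = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / col_sq[j]
    return c

# Illustrative use on a synthetic 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(60)
K = gaussian_kernel(X, X)
c = l1_kernel_regression(K, y)
print("nonzero coefficients:", np.count_nonzero(c), "of", len(c))
```

Because the 60 kernel columns are highly correlated, the l(1) penalty typically drives most coefficients to exactly zero, so the learned function is represented by only a few kernel sections.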