A Scalable Projective Scaling Algorithm for l(p) Loss With Convex Penalizations

Cited by: 3
Authors
Zhou, Hongbo [1 ]
Cheng, Qiang [1 ]
Affiliations
[1] Southern Illinois University, Department of Computer Science, Carbondale, IL 62901 USA
Funding
US National Science Foundation;
Keywords
Convex function; Karmarkar's projective scaling condition; l(p) loss function; message passing algorithm (MPA); minimization-majorization (MM); nonconvex; scalability; FACE RECOGNITION; SPARSE; REGRESSION; CONVERGENCE; SHRINKAGE; SELECTION;
DOI
10.1109/TNNLS.2014.2314129
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions that have an l(p) loss function as an additive component. For this problem, well-known learning algorithms have well-established accuracy and efficiency results, but explicit linear scalability with respect to the problem size is rarely reported. The proposed approach starts by developing a second-order iterative-descent learning procedure for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions that satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it involves only matrix-vector multiplications and avoids matrix inversions. The MPA is proven to be globally convergent for convex formulations; for nonconvex problems, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems.
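To make the matrix-vector-only scalability claim concrete, here is a minimal illustrative sketch, not the authors' MPA: a plain proximal-gradient loop for one hypothetical member of the "l(p) loss plus convex penalization" family, namely (1/p)||Ax - b||_p^p + lam*||x||_1 with p > 1. Like the MPA described in the abstract, it touches the design matrix only through the products A @ x and A.T @ r and performs no matrix inversion. The function name lp_prox_gradient, the fixed step size, and the iteration budget are assumptions chosen for illustration.

```python
# Illustrative stand-in only (NOT the paper's MPA): proximal gradient for
#   min_x  (1/p) * ||A x - b||_p^p + lam * ||x||_1,   with p > 1.
# The loop uses only matrix-vector products and no matrix inversion, which
# is the property the abstract credits for the MPA's linear scalability.
import numpy as np

def lp_prox_gradient(A, b, p=1.5, lam=0.1, step=1e-3, iters=500):
    """Minimize (1/p)||Ax-b||_p^p + lam*||x||_1 using only A @ x and A.T @ r."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - b                                       # residual: one matvec
        grad = A.T @ (np.abs(r) ** (p - 1) * np.sign(r))    # gradient of l_p loss: one matvec
        z = x - step * grad                                 # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold prox of lam*||.||_1
    return x

# Tiny usage example on a random sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = lp_prox_gradient(A, b)
```

Because every iteration costs two matrix-vector products plus elementwise work, the per-iteration cost grows linearly with the number of nonzeros in A, which is the kind of explicit scaling behavior the paper targets.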
Pages: 265-276
Number of pages: 12