Smooth approximation method for non-smooth empirical risk minimization based distance metric learning

Cited by: 3
Authors
Shi, Ya [1 ]
Ji, Hongbing [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
Keywords
Distance metric learning; Empirical risk minimization; Smooth approximation; Nesterov's optimal first-order method
DOI
10.1016/j.neucom.2013.08.030
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Distance metric learning (DML) has become a very active research field in recent years. Bian and Tao (IEEE Trans. Neural Netw. Learn. Syst. 23(8) (2012) 1194-1205) presented a constrained empirical risk minimization (ERM) framework for DML. In this paper, we use a smooth approximation method to make their algorithm applicable to the non-differentiable hinge loss function. We show that the objective function with hinge loss is equivalent to a non-smooth min-max representation, from which an approximate objective function is derived. Unlike the original objective function, the approximation is differentiable with a Lipschitz-continuous gradient. Consequently, Nesterov's optimal first-order method can be applied directly. Finally, the effectiveness of our method is evaluated on various UCI datasets. (C) 2013 Elsevier B.V. All rights reserved.
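Note: the smoothing idea summarized in the abstract can be sketched in code. The following is a minimal, hypothetical illustration of Nesterov-style smoothing of the hinge loss (replacing max(0, 1-z) with its min-max-derived smooth surrogate) combined with Nesterov's optimal first-order method, applied to a plain unconstrained linear ERM problem rather than the paper's constrained DML formulation. All names (smoothed_hinge, nesterov_agd), the smoothing parameter mu, and the regularizer lam are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: Nesterov smoothing of the hinge loss + Nesterov's optimal
# first-order method on an L2-regularized linear ERM problem. This is an
# illustration of the general technique, not the paper's DML algorithm.
import numpy as np

def smoothed_hinge(z, mu):
    """Smooth surrogate of max(0, 1 - z) from the min-max representation
    h_mu(z) = max_{u in [0,1]} [u*(1 - z) - (mu/2)*u**2].
    Closed-form maximizer: u* = clip((1 - z)/mu, 0, 1).
    Returns the value and its derivative d/dz = -u* (Danskin's theorem)."""
    t = 1.0 - z
    u = np.clip(t / mu, 0.0, 1.0)
    return u * t - 0.5 * mu * u**2, -u

def objective_grad(w, X, y, mu, lam):
    """Smoothed empirical risk with L2 regularization, and its gradient."""
    margins = y * (X @ w)
    vals, dz = smoothed_hinge(margins, mu)
    risk = vals.mean() + 0.5 * lam * (w @ w)
    grad = (X.T @ (dz * y)) / len(y) + lam * w
    return risk, grad

def nesterov_agd(X, y, mu=0.1, lam=1e-3, iters=200):
    """Nesterov's optimal first-order method for smooth convex problems.
    The smoothed risk has a Lipschitz-continuous gradient with constant
    roughly ||X||_2^2 / (n * mu) + lam, which fixes the step size 1/L."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / (n * mu) + lam
    w = np.zeros(d)   # iterate x_k
    v = w.copy()      # extrapolation point y_k
    t = 1.0
    for _ in range(iters):
        _, g = objective_grad(v, X, y, mu, lam)
        w_next = v - g / L                                 # gradient step at y_k
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum schedule
        v = w_next + ((t - 1.0) / t_next) * (w_next - w)   # extrapolation
        w, t = w_next, t_next
    return w

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w_hat = nesterov_agd(X, y)
print("final smoothed risk:", objective_grad(w_hat, X, y, 0.1, 1e-3)[0])
```

Smaller mu tightens the approximation of the true hinge loss but inflates the Lipschitz constant 1/mu, so the step size shrinks accordingly; this accuracy-versus-speed trade-off is the standard feature of smoothing-based first-order schemes.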
Pages: 135-143
Page count: 9