Convergence rates of subgradient methods for quasi-convex optimization problems

Citations: 0
Authors
Yaohua Hu
Jiawen Li
Carisa Kwok Wai Yu
Affiliations
[1] Shenzhen University,Shenzhen Key Laboratory of Advanced Machine Learning and Applications, College of Mathematics and Statistics
[2] Shenzhen University,College of Mathematics and Statistics
[3] The Hang Seng University of Hong Kong,Department of Mathematics, Statistics and Insurance
Keywords
Quasi-convex programming; Subgradient method; Iteration complexity; Convergence rates; Primary: 65K05; 90C26; Secondary: 49M37;
DOI
Not available
Abstract
Quasi-convex optimization plays a pivotal role in many fields, including economics and finance; the subgradient method is an effective iterative algorithm for solving large-scale quasi-convex optimization problems. In this paper, we investigate the quantitative convergence theory, including the iteration complexity and convergence rates, of various subgradient methods for solving quasi-convex optimization problems in a unified framework. In particular, we consider a sequence satisfying a general (inexact) basic inequality, and establish global convergence theorems and iteration complexity bounds under constant, diminishing, or dynamic stepsize rules. More importantly, we establish the linear (or sublinear) convergence rates of the sequence under the additional assumptions of weak sharp minima of Hölderian order and upper bounded noise. These convergence theorems are applied to establish the iteration complexity and convergence rates of several subgradient methods, including the standard/inexact/conditional subgradient methods, for solving quasi-convex optimization problems under the assumptions of the Hölder condition and/or the weak sharp minima of Hölderian order.
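The subgradient iteration the abstract refers to can be sketched in a few lines. The sketch below is illustrative only, not the authors' exact scheme: it uses a normalized (quasi-)subgradient direction with the diminishing stepsize rule α_k = c/√(k+1), one of the rules analyzed in the paper; the function names and the test problem are our own assumptions.

```python
import numpy as np

def subgradient_method(subgrad, x0, num_iters=2000, c=1.0):
    """Normalized subgradient method with diminishing stepsizes
    alpha_k = c / sqrt(k+1).  `subgrad` is a user-supplied oracle
    returning a (quasi-)subgradient of the objective at x; this is
    a generic sketch, not the paper's specific algorithm."""
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = np.asarray(subgrad(x), dtype=float)
        norm = np.linalg.norm(g)
        if norm == 0.0:            # zero subgradient: x is stationary
            break
        alpha = c / np.sqrt(k + 1)  # diminishing, non-summable stepsize
        x = x - alpha * g / norm    # step along the unit-norm direction
    return x

# Hypothetical example: f(x) = sqrt(||x||) is quasi-convex with
# minimizer 0; for x != 0 its normalized quasi-subgradient direction
# is x / ||x||, so passing the identity map as the oracle suffices.
x_star = subgradient_method(lambda x: x, x0=[3.0, -4.0])
```

With a diminishing stepsize the iterates approach the minimizer without requiring knowledge of the optimal value, at the cost of the slower (sublinear) rate quantified in the paper; a constant stepsize would instead converge only to within a fixed tolerance of the minimum.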
Pages: 183–212 (29 pages)