Accelerated stochastic approximation with state-dependent noise

Times Cited: 0
Authors
Ilandarideva, Sasila [1 ]
Juditsky, Anatoli [1 ]
Lan, Guanghui [2 ]
Li, Tianjiao [2 ]
Affiliations
[1] Univ Grenoble Alpes, LJK, F-38401 St Martin d'Heres, France
[2] Georgia Inst Technol, H Milton Stewart Sch Ind & Syst Engn, Atlanta, GA 30332 USA
Keywords
Stochastic optimization; State-dependent noise; Accelerated stochastic approximation; Stochastic gradient extrapolation; Sparse recovery; VARIATIONAL-INEQUALITIES; COMPOSITE OPTIMIZATION; ALGORITHMS; RECOVERY;
DOI
10.1007/s10107-024-02138-4
Chinese Library Classification
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
We consider a class of stochastic smooth convex optimization problems under rather general assumptions on the noise in the stochastic gradient observations. As opposed to the classical setting, in which the variance of the noise is assumed to be uniformly bounded, here we assume that the variance of the stochastic gradients is related to the "sub-optimality" of the approximate solutions delivered by the algorithm. Such problems naturally arise in a variety of applications, in particular in the well-known generalized linear regression problem in statistics. However, to the best of our knowledge, none of the existing stochastic approximation algorithms for solving this class of problems attains optimality in terms of the dependence on accuracy, problem parameters, and mini-batch size. We discuss two non-Euclidean accelerated stochastic approximation routines, stochastic accelerated gradient descent (SAGD) and stochastic gradient extrapolation (SGE), which carry a particular duality relationship. We show that both SAGD and SGE, under appropriate conditions, achieve the optimal convergence rate, attaining the optimal iteration and sample complexities simultaneously. However, the corresponding assumptions for the SGE algorithm are more general; they allow, for instance, for efficient application of the SGE to statistical estimation problems under heavy-tailed noise and discontinuous score functions. We also discuss the application of the SGE to problems satisfying quadratic growth conditions, and show how it can be used to recover sparse solutions. Finally, we report on some simulation experiments to illustrate the numerical performance of our proposed algorithms in high-dimensional settings.
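The state-dependent-noise setting described in the abstract can be illustrated with a small numerical sketch (this is an assumption-labeled illustration, not the paper's SAGD/SGE algorithms): in generalized linear regression with noiseless responses, the variance of a single-sample stochastic gradient shrinks as the iterate approaches the minimizer, rather than being uniformly bounded as in the classical setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
x_star = rng.normal(size=d)  # ground-truth parameter (hypothetical example)

def stoch_grad(x):
    """One-sample gradient of f(x) = E[(a^T x - y)^2] / 2 with y = a^T x_star."""
    a = rng.normal(size=d)
    return a * (a @ (x - x_star))  # equals a a^T (x - x_star)

def grad_noise_var(x, trials=5000):
    """Empirical E ||g(x) - grad f(x)||^2; here grad f(x) = x - x_star."""
    g_true = x - x_star
    errs = [np.sum((stoch_grad(x) - g_true) ** 2) for _ in range(trials)]
    return float(np.mean(errs))

u = rng.normal(size=d)
near = grad_noise_var(x_star + 0.01 * u)  # iterate close to the optimum
far = grad_noise_var(x_star + 10.0 * u)   # iterate far from the optimum
print(near, far)  # the noise variance grows with the sub-optimality of x
```

Since the gradient noise here is (a a^T - I)(x - x_star), its second moment scales quadratically with the distance to the optimum and vanishes at x_star, which is exactly the regime where a uniform variance bound is overly pessimistic.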
Pages: 42