Variance-Reduced Accelerated First-Order Methods: Central Limit Theorems and Confidence Statements

Cited by: 0
Authors
Lei, Jinlong [1 ,2 ]
Shanbhag, Uday V. [3 ]
Affiliations
[1] Tongji Univ, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[2] Tongji Univ, Shanghai Res Inst Intelligent Autonomous Syst, Shanghai 201804, Peoples R China
[3] Penn State Univ, Dept Ind & Mfg Engn, University Pk, PA 16802 USA
Funding
National Natural Science Foundation of China;
Keywords
stochastic optimization; variance-reduced schemes; central limit theorems; confidence intervals; stochastic approximation; asymptotic normality; gradient; optimization; parameters;
DOI
10.1287/moor.2021.0068
CLC Number
C93 [Management Science]; O22 [Operations Research];
Discipline Classification Code
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
In this paper, we consider a strongly convex stochastic optimization problem and propose three classes of variable sample-size stochastic first-order methods: (i) the standard stochastic gradient descent method, (ii) its accelerated variant, and (iii) the stochastic heavy-ball method. In each scheme, the exact gradient is approximated by averaging across an increasing batch of sampled gradients. We prove that when the sample size increases at a geometric rate, the generated estimates converge in mean to the optimal solution at an analogous geometric rate for schemes (i)-(iii). Based on this result, we provide central limit statements showing that the rescaled estimation errors converge in distribution to a normal distribution whose covariance matrix depends on the Hessian matrix, the covariance of the gradient noise, and the step length. If the sample size increases at a polynomial rate, we show that the estimation errors decay at a corresponding polynomial rate and establish the associated central limit theorems (CLTs). Under certain conditions, we discuss how both the algorithms and the associated limit theorems may be extended to constrained and nonsmooth regimes. Finally, we provide an avenue to construct confidence regions for the optimal solution based on the established CLTs and test the theoretical findings on a stochastic parameter estimation problem.
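The abstract's scheme (i) reduces variance by averaging a growing batch of sampled gradients at each iteration. Below is a minimal sketch of that idea, assuming a toy stochastic least-squares objective and geometric batch growth N_k = ceil(N0 * gamma^k) with gamma > 1; the objective, step length, and growth constants are illustrative assumptions, not the paper's construction or tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative strongly convex stochastic problem (not the paper's test problem):
# f(x) = E[ 0.5 * ||A x - b - xi||^2 ],  xi ~ N(0, sigma^2 I),
# whose unique minimizer is x* = (A^T A)^{-1} A^T b.
d = 5
A = np.eye(d) + 0.1 * rng.normal(size=(d, d))   # well-conditioned by construction
b = rng.normal(size=d)
x_star = np.linalg.solve(A.T @ A, A.T @ b)
sigma = 0.5

def sampled_gradient(x, batch_size):
    """Average of batch_size noisy gradients A^T (A x - b - xi_i)."""
    xi = rng.normal(scale=sigma, size=(batch_size, d))
    residuals = (A @ x - b) - xi          # one row per sampled gradient
    return (residuals @ A).mean(axis=0)   # row i equals A^T residuals[i]

# Variable sample-size SGD: constant step length alpha, batch size N_k
# growing at a geometric rate (alpha, N0, gamma are illustrative choices).
x = np.zeros(d)
alpha, N0, gamma = 0.5, 2, 1.2
for k in range(60):
    N_k = int(np.ceil(N0 * gamma**k))     # geometrically increasing batch
    x = x - alpha * sampled_gradient(x, N_k)

print("final error ||x_k - x*|| =", np.linalg.norm(x - x_star))
```

With geometric batch growth, the bias from the deterministic gradient step and the sampling noise both shrink geometrically, which is the mechanism behind the geometric mean-convergence rate claimed for schemes (i)-(iii).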
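The abstract also mentions constructing confidence regions for the optimal solution from the established CLTs. A hedged sketch of that step, assuming a limit of the form sqrt(N_k) * (x_k - x*) -> N(0, Sigma) and a consistent plug-in estimate Sigma_hat (for instance, built from sampled Hessian and gradient-noise-covariance estimates); the names x_k, N_k, Sigma_hat, delta are illustrative, not the paper's notation:

```python
import numpy as np
from scipy import stats

def chi2_confidence_test(x_candidate, x_k, N_k, Sigma_hat, delta=0.05):
    """True iff x_candidate lies inside the asymptotic (1 - delta) ellipsoid
    { x : N_k (x_k - x)^T Sigma_hat^{-1} (x_k - x) <= chi2_{d, 1 - delta} }."""
    d = len(x_k)
    diff = x_k - x_candidate
    statistic = N_k * diff @ np.linalg.solve(Sigma_hat, diff)
    return statistic <= stats.chi2.ppf(1.0 - delta, df=d)

def coordinatewise_intervals(x_k, N_k, Sigma_hat, delta=0.05):
    """Marginal (1 - delta) intervals x_k[i] +/- z * sqrt(Sigma_hat[i, i] / N_k)."""
    z = stats.norm.ppf(1.0 - delta / 2.0)
    half_width = z * np.sqrt(np.diag(Sigma_hat) / N_k)
    return x_k - half_width, x_k + half_width

# Toy usage with a stand-in final iterate and covariance estimate.
x_k = np.array([1.02, -0.49, 0.31])
Sigma_hat = np.diag([0.8, 1.1, 0.9])
lo, hi = coordinatewise_intervals(x_k, N_k=10_000, Sigma_hat=Sigma_hat)
print(np.c_[lo, hi])
print(chi2_confidence_test(np.array([1.0, -0.5, 0.3]), x_k, 10_000, Sigma_hat))
```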
Pages: 35