ROBUST ACCELERATED GRADIENT METHODS FOR SMOOTH STRONGLY CONVEX FUNCTIONS

Cited by: 29
Authors
Aybat, Necdet Serhat [1 ]
Fallah, Alireza [2 ]
Gurbuzbalaban, Mert [3 ]
Ozdaglar, Asuman [2 ]
Affiliations
[1] Penn State Univ, Dept Ind & Mfg Engn, University Pk, PA 16802 USA
[2] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[3] Rutgers State Univ, Dept Management Sci & Informat Syst, Piscataway, NJ 08854 USA
Keywords
convex optimization; stochastic approximation; robust control theory; accelerated methods; Nesterov's method; matrix inequalities; STOCHASTIC-APPROXIMATION ALGORITHMS; OPTIMIZATION ALGORITHMS; COMPOSITE OPTIMIZATION; H-2
DOI
10.1137/19M1244925
CLC classification number
O29 [Applied Mathematics];
Discipline classification code
070104;
Abstract
We study the trade-offs between convergence rate and robustness to gradient errors in designing a first-order algorithm. We focus on gradient descent and accelerated gradient (AG) methods for minimizing strongly convex functions when the gradient has random errors in the form of additive white noise. With gradient errors, the function values of the iterates need not converge to the optimal value; hence, we define the robustness of an algorithm as the ratio of the asymptotic expected suboptimality of the iterate sequence to the input noise power. For this robustness measure, we provide exact expressions for the quadratic case using tools from robust control theory, and tight upper bounds for the smooth strongly convex case using Lyapunov functions certified through matrix inequalities. We use these characterizations within an optimization problem that selects the parameters of each algorithm to achieve a particular trade-off between rate and robustness. Our results show that AG can achieve acceleration while being more robust to random gradient errors. This behavior is quite different from what has previously been reported in the deterministic gradient noise setting. We also establish connections between the robustness of an algorithm and how quickly it can converge back to the optimal solution after it is perturbed from the optimal point by deterministic noise. Our framework also leads to practical algorithms that can perform better than other state-of-the-art methods in the presence of random gradient noise.
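To make the setting concrete, below is a minimal numerical sketch (not the authors' code) comparing gradient descent with Nesterov's AG on a strongly convex quadratic whose gradient oracle is corrupted by additive white Gaussian noise; the tail average of f(x_k) - f* serves as an empirical proxy for the asymptotic expected suboptimality. The problem instance, step sizes, momentum choice, and noise level sigma are illustrative assumptions, not the parameter choices from the paper.

```python
# Minimal sketch: GD vs. Nesterov's AG under additive white gradient noise.
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex quadratic f(x) = 0.5 * x^T A x, with mu <= eig(A) <= L,
# so x* = 0 and f* = 0.
n, mu, L = 20, 1.0, 100.0
A = np.diag(np.linspace(mu, L, n))

def noisy_grad(x, sigma):
    # Exact gradient plus additive white Gaussian noise of power sigma^2.
    return A @ x + sigma * rng.standard_normal(n)

def run(method, iters=20000, sigma=0.1):
    x = np.ones(n)
    x_prev = x.copy()
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # AG momentum
    subopt = []
    for _ in range(iters):
        if method == "gd":
            x = x - (1.0 / L) * noisy_grad(x, sigma)
        else:  # Nesterov AG with constant momentum (standard textbook tuning)
            y = x + beta * (x - x_prev)
            x_prev = x
            x = y - (1.0 / L) * noisy_grad(y, sigma)
        subopt.append(0.5 * x @ A @ x)  # f(x_k) - f*, since f* = 0
    # Average the tail iterates as a proxy for the asymptotic
    # expected suboptimality at this noise level.
    return np.mean(subopt[iters // 2:])

for method in ("gd", "ag"):
    print(method, run(method))
```

Varying sigma and the step-size/momentum parameters in this sketch traces out the rate-versus-robustness trade-off that the paper characterizes exactly for quadratics.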
Pages: 717-751
Number of pages: 35
Related papers
50 items in total
  • [41] On Strongly Generalized Convex Functions
    Awan, Muhammad Uzair
    Noor, Muhammad Aslam
    Noor, Khalida Inayat
    Safdar, Farhat
    FILOMAT, 2017, 31 (18) : 5783 - 5790
  • [42] STRONGLY EXPONENTIALLY CONVEX FUNCTIONS
    Noor, Muhammad Aslam
    Noor, Khalida Inayat
    UNIVERSITY POLITEHNICA OF BUCHAREST SCIENTIFIC BULLETIN-SERIES A-APPLIED MATHEMATICS AND PHYSICS, 2019, 81 (04): : 75 - 84
  • [43] Strongly convex matrix functions
    Sano, Takashi
    ACTA SCIENTIARUM MATHEMATICARUM, 2024, 90 (3-4): : 637 - 647
  • [44] ON THE APPROXIMATION BY φ-STRONGLY CONVEX FUNCTIONS
    Bahos-Orjuela, Cesar M.
    Herrera-Cruz, Briyith L.
    Ramos-Fernandez, Julio C.
    REAL ANALYSIS EXCHANGE, 2024, 49 (01) : 1 - 12
  • [45] Fractional order gradient methods for a general class of convex functions
    Chen, Yuquan
    Wei, Yiheng
    Wang, Yong
    Chen, YangQuan
    2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 2018, : 3763 - 3767
  • [46] GRADIENT METHODS OF MAXIMIZATION OF CONVEX-FUNCTIONS ON DISCRETE STRUCTURES
    Kovalev, M. M.
    CYBERNETICS, 1985, 21 (06): : 819 - 830
  • [48] Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
    Khaled, Ahmed
    Sebbouh, Othmane
    Loizou, Nicolas
    Gower, Robert M.
    Richtárik, Peter
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2023, 199 (02) : 499 - 540
  • [49] Continuous and Discrete-time Accelerated Stochastic Mirror Descent for Strongly Convex Functions
    Xu, Pan
    Wang, Tianhao
    Gu, Quanquan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018
  • [50] New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization
    Li, Meng
    Grigas, Paul
    Atamturk, Alper
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2025, 205 (02)