STOCHASTIC FIRST- AND ZEROTH-ORDER METHODS FOR NONCONVEX STOCHASTIC PROGRAMMING

Cited: 721
Authors
Ghadimi, Saeed [1 ]
Lan, Guanghui [1 ]
Affiliations
[1] Univ Florida, Dept Ind & Syst Engn, Gainesville, FL 32611 USA
Funding
U.S. National Science Foundation
Keywords
stochastic approximation; nonconvex optimization; stochastic programming; simulation-based optimization; worst-case complexity; approximation algorithms; composite optimization; gradient
DOI
10.1137/120880811
Chinese Library Classification (CLC)
O29 [Applied Mathematics]
Subject classification
070104
Abstract
In this paper, we introduce a new stochastic approximation-type algorithm, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem, and we show that the method also possesses a nearly optimal rate of convergence when the problem is convex. We further discuss a variant of the algorithm that applies a post-optimization phase to a short list of solutions generated by several independent runs of the RSG method, and we show that this modification significantly improves the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.
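To make the algorithmic ideas in the abstract concrete, below is a minimal Python sketch of its three ingredients: the RSG loop (run SGD for a randomly drawn number of iterations and return that iterate), the post-optimization phase (keep the best of several independent runs), and a Gaussian-smoothing estimator that supplies gradients from stochastic zeroth-order (function-value) information. The names `rsg`, `two_rsg`, and `gaussian_smoothed_grad`, the uniform draw of the stopping index, and the fixed validation-sample size are simplifying assumptions of this sketch, not the paper's exact constructions; the paper derives specific stepsizes and a specific probability mass function for the stopping index from its complexity analysis.

```python
import numpy as np

def rsg(grad_oracle, x0, num_iters, stepsize, rng=None):
    """Randomized stochastic gradient (RSG) sketch: run SGD for a
    randomly drawn number of iterations and return that iterate.
    Returning a randomly chosen iterate, rather than the last one,
    is what yields the stationarity guarantee for nonconvex f."""
    rng = rng or np.random.default_rng()
    # Assumption of this sketch: the stopping index is drawn uniformly
    # from {1, ..., num_iters}; the paper draws it from a probability
    # mass function tied to the stepsizes, which is roughly uniform
    # when the stepsize is constant.
    stop = int(rng.integers(1, num_iters + 1))
    x = np.asarray(x0, dtype=float)
    for _ in range(stop):
        x = x - stepsize * grad_oracle(x)  # unbiased stochastic gradient step
    return x

def gaussian_smoothed_grad(f_oracle, x, mu, rng):
    """Zeroth-order gradient estimate from two noisy function values
    (Gaussian smoothing); plug this in as `grad_oracle` when only
    samples of f(x) are available, as in simulation-based problems."""
    u = rng.standard_normal(x.shape)
    return (f_oracle(x + mu * u) - f_oracle(x)) / mu * u

def two_rsg(grad_oracle, x0, num_runs, num_iters, stepsize,
            num_val_samples=100, rng=None):
    """Post-optimization phase: run RSG independently several times and
    keep the candidate whose averaged stochastic gradient has the
    smallest norm. This is the modification that improves the
    large-deviation behavior of plain RSG."""
    rng = rng or np.random.default_rng()
    candidates = [rsg(grad_oracle, x0, num_iters, stepsize, rng)
                  for _ in range(num_runs)]

    def estimated_grad_norm(x):
        g = np.mean([grad_oracle(x) for _ in range(num_val_samples)], axis=0)
        return float(np.linalg.norm(g))

    return min(candidates, key=estimated_grad_norm)

# Toy usage on a nonconvex objective f(x) = sum(x**4 - x**2), observed
# through noisy gradients g(x) = 4x**3 - 2x + noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: 4 * x**3 - 2 * x + 0.1 * rng.standard_normal(x.shape)
x_out = two_rsg(noisy_grad, x0=0.5 * np.ones(5), num_runs=5,
                num_iters=2000, stepsize=0.01)
```

In the zeroth-order setting, `gaussian_smoothed_grad` (with a small smoothing radius `mu`) would replace `noisy_grad` above, at the cost of the dimension-dependent factor in the complexity bound that the paper quantifies.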
Pages: 2341-2368
Page count: 28
Related papers
50 items in total
  • [41] Ghadimi, Saeed; Lan, Guanghui. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming, 2016, 156(1-2): 59-99.
  • [42] Boob, Digvijay; Deng, Qi; Lan, Guanghui. Stochastic first-order methods for convex and nonconvex functional constrained optimization. Mathematical Programming, 2023, 197(1): 215-279.
  • [44] Balasubramanian, Krishnakumar; Ghadimi, Saeed. Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018.
  • [45] Ji, Kaiyi; Wang, Zhe; Zhou, Yi; Liang, Yingbin. Improved zeroth-order variance reduced algorithms and analysis for nonconvex optimization. International Conference on Machine Learning, Vol. 97, 2019.
  • [46] Feng, Yasong; Wang, Tianyu. Stochastic zeroth-order gradient and Hessian estimators: variance reduction and refined bias bounds. Information and Inference: A Journal of the IMA, 2023, 12(3).
  • [47] Berglund, Erik; Khirirat, Sarit; Wang, Xiaoyu. Zeroth-order randomized subspace Newton methods. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 6002-6006.
  • [48] Xin, Yonghao; Feng, Quanyuan; Tao, Jun. A zeroth-order resonant antenna with bandwidth extended by merging the zeroth-order mode with the first-negative mode. 2017 Progress in Electromagnetics Research Symposium - Spring (PIERS), 2017: 185-189.
  • [49] Huang, Feihu; Gao, Shangqian; Pei, Jian; Huang, Heng. Accelerated zeroth-order and first-order momentum methods from mini to minimax optimization. Journal of Machine Learning Research, 2022, 23: 1-70.
  • [50] Mueller, Alfred; Scarsini, Marco; Tsetlin, Ilia; Winkler, Robert L. Between first- and second-order stochastic dominance. Management Science, 2017, 63(9): 2933-2947.