Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization

Cited by: 0
Authors
Bollapragada, Raghu [1 ]
Wild, Stefan M. M. [2 ]
Affiliations
[1] Univ Texas Austin, Operat Res & Ind Engn, Austin, TX 78712 USA
[2] Lawrence Berkeley Natl Lab, Appl Math & Computat Res Div, Berkeley, CA 94720 USA
Keywords
Derivative-free optimization; Stochastic oracles; Adaptive sampling; Common random numbers; Direct search methods; BFGS method; Convexity; Rates
DOI
10.1007/s12532-023-00233-9
CLC number
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
We consider unconstrained stochastic optimization problems with no available gradient information. Such problems arise in settings from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients using finite differences of stochastic function evaluations within a common random number framework. We develop modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations and provide global convergence results to the neighborhood of a locally optimal solution. We present numerical experiments on simulation optimization problems to illustrate the performance of the proposed algorithm. When compared with classical zeroth-order stochastic gradient methods, we observe that our strategies of adapting the sample sizes significantly improve performance in terms of the number of stochastic function evaluations required.
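The two ingredients named in the abstract, finite-difference gradient estimates under common random numbers and a norm-test-style rule for growing the sample size, can be sketched as follows. This is an illustrative sketch only: the oracle `F`, the constants (`theta`, `m_max`), and the seed-based noise model are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def fd_gradient_crn(F, x, h, seed):
    """Forward-difference gradient estimate of E[F(x, xi)].
    F(x) and each F(x + h*e_i) are evaluated under the SAME seed
    (common random numbers), so shared noise cancels in the difference."""
    fx = F(x, seed)
    g = np.empty_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (F(xp, seed) - fx) / h
    return g

def adaptive_gradient(F, x, h, theta=0.5, m=2, m_max=64, seed0=0):
    """Grow the sample size until a norm-test-style condition holds:
    the estimated variance of the averaged gradient must be at most
    theta^2 times the squared norm of the average itself."""
    while True:
        gs = np.array([fd_gradient_crn(F, x, h, seed0 + s) for s in range(m)])
        g = gs.mean(axis=0)
        var_of_mean = gs.var(axis=0, ddof=1).sum() / m  # variance of the sample mean
        if var_of_mean <= theta**2 * (g @ g) or m >= m_max:
            return g, m
        m *= 2  # sample size not yet adequate: double and retry

# Toy stochastic oracle with multiplicative noise; CRN reduces but does
# not eliminate the noise. The true gradient of the expectation is x.
def F(x, seed):
    z = np.random.default_rng(seed).standard_normal()
    return 0.5 * (x @ x) * (1.0 + 0.05 * z)

x = np.array([1.0, -2.0])
g, m = adaptive_gradient(F, x, h=1e-4)
```

In the full method these adaptively sampled gradients would drive a quasi-Newton (e.g. BFGS-style) update rather than a plain gradient step; the sketch isolates only the sampling logic.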
Pages: 327 - 364
Page count: 38