Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization

Cited: 0
Authors
Raghu Bollapragada
Stefan M. Wild
Affiliations
[1] The University of Texas at Austin, Operations Research and Industrial Engineering
[2] Lawrence Berkeley National Laboratory, Applied Mathematics and Computational Research Division
Keywords
Derivative-free optimization; Stochastic oracles; Adaptive sampling; Common random numbers; 90C56; 65K05; 90C15; 90C30; 90C53
DOI: not available
Abstract
We consider unconstrained stochastic optimization problems with no available gradient information. Such problems arise in settings from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients using finite differences of stochastic function evaluations within a common random number framework. We develop modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations and provide global convergence results to the neighborhood of a locally optimal solution. We present numerical experiments on simulation optimization problems to illustrate the performance of the proposed algorithm. When compared with classical zeroth-order stochastic gradient methods, we observe that our strategies of adapting the sample sizes significantly improve performance in terms of the number of stochastic function evaluations required.
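The mechanism the abstract describes — finite-difference gradient estimates built from stochastic function evaluations that share common random numbers (CRN), with a norm test controlling the sample size — can be sketched roughly as follows. This is an illustrative sketch only, not the paper's algorithm: the function names (`fd_gradient_crn`, `norm_test_satisfied`), the forward-difference scheme, and the specific form of the variance test are assumptions for demonstration.

```python
import numpy as np

def fd_gradient_crn(f, x, seeds, h=1e-5):
    """Average forward-difference gradient estimates over a batch of
    random-number seeds. Each per-seed estimate evaluates f(x) and
    f(x + h*e_j) with the SAME seed, so the stochastic noise is
    correlated (common random numbers) and largely cancels in the
    difference. Returns (mean gradient, per-seed gradient samples)."""
    d = len(x)
    grads = np.zeros((len(seeds), d))
    for i, s in enumerate(seeds):
        fx = f(x, s)  # baseline evaluation under seed s
        for j in range(d):
            e = np.zeros(d)
            e[j] = h
            grads[i, j] = (f(x + e, s) - fx) / h  # same seed: CRN pairing
    return grads.mean(axis=0), grads

def norm_test_satisfied(grads, theta=0.9):
    """A norm-test-style check (assumed form): accept the current sample
    size when the estimated variance of the sample-mean gradient is small
    relative to the squared norm of the gradient estimate; otherwise the
    outer loop would increase the number of seeds."""
    n = grads.shape[0]
    g = grads.mean(axis=0)
    var_of_mean = grads.var(axis=0, ddof=1).sum() / n
    return var_of_mean <= theta**2 * np.dot(g, g)
```

As a usage illustration, on a noisy quadratic `f(x, s) = 0.5*||x||^2 + noise(s)` with seed-dependent additive noise, the CRN pairing cancels the noise inside each finite difference, so even a small batch of seeds recovers the gradient `x` accurately and the norm test is satisfied immediately.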
Pages: 327-364 (37 pages)
Related papers
(50 records in total; items 21-30 shown)
  • [21] STOCHASTIC FIRST- AND ZEROTH-ORDER METHODS FOR NONCONVEX STOCHASTIC PROGRAMMING
    Ghadimi, Saeed
    Lan, Guanghui
    [J]. SIAM JOURNAL ON OPTIMIZATION, 2013, 23 (04) : 2341 - 2368
  • [22] An Adaptive Quasi-Newton Equation for Unconstrained Optimization
    Hassan, Basin A.
    Ayoob, Abdulrahman R.
    [J]. PROCEEDING OF 2021 2ND INFORMATION TECHNOLOGY TO ENHANCE E-LEARNING AND OTHER APPLICATION (IT-ELA 2021), 2021, : 1 - 5
  • [23] Sequential stochastic blackbox optimization with zeroth-order gradient estimators
    Audet, Charles
    Bigeon, Jean
    Couderc, Romain
    Kokkolaras, Michael
    [J]. AIMS MATHEMATICS, 2023, 8 (11) : 25922 - 25956
  • [24] ZEROTH-ORDER STOCHASTIC PROJECTED GRADIENT DESCENT FOR NONCONVEX OPTIMIZATION
    Liu, Sijia
    Li, Xingguo
    Chen, Pin-Yu
    Haupt, Jarvis
    Amini, Lisa
    [J]. 2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018, : 1179 - 1183
  • [25] A Generic Approach for Accelerating Stochastic Zeroth-Order Convex Optimization
    Yu, Xiaotian
    King, Irwin
    Lyu, Michael R.
    Yang, Tianbao
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 3040 - 3046
  • [26] A zeroth-order algorithm for distributed optimization with stochastic stripe observations
    Wang, Yinghui
    Zeng, Xianlin
    Zhao, Wenxiao
    Hong, Yiguang
    [J]. SCIENCE CHINA-INFORMATION SCIENCES, 2023, 66 (09)
  • [29] On quasi-Newton methods with modified quasi-Newton equation
    Xiao, Wei
    Sun, Fengjian
    [J]. PROCEEDINGS OF 2008 INTERNATIONAL PRE-OLYMPIC CONGRESS ON COMPUTER SCIENCE, VOL II: INFORMATION SCIENCE AND ENGINEERING, 2008, : 359 - 363
  • [30] A single timescale stochastic quasi-Newton method for stochastic optimization
    Wang, Peng
    Zhu, Detong
    [J]. INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 2023, 100 (12) : 2196 - 2216