A stochastic subspace approach to gradient-free optimization in high dimensions

Citations: 9
Authors
Kozak, David [1 ,2 ]
Becker, Stephen [3 ]
Doostan, Alireza [4 ]
Tenorio, Luis [1 ]
Affiliations
[1] Solea Energy, Overland Pk, KS 66210 USA
[2] Colorado Sch Mines, Dept Appl Math & Stat, Golden, CO 80401 USA
[3] Univ Colorado, Dept Appl Math, Boulder, CO 80309 USA
[4] Univ Colorado, Dept Aerosp Engn Sci, Boulder, CO 80309 USA
Funding
US National Science Foundation (NSF);
Keywords
Randomized methods; Gradient-free; Gaussian processes; Stochastic gradients; Inverse problems; Convex programs; Descent; Algorithm; Minimization; Uncertainty; Performance; Complexity; Flow
DOI
10.1007/s10589-021-00271-w
Chinese Library Classification (CLC)
C93 [Management]; O22 [Operations Research];
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
We present a stochastic descent algorithm for unconstrained optimization that is particularly efficient when the objective function is slow to evaluate and gradients are not easily obtained, as in some PDE-constrained optimization and machine learning problems. The algorithm maps the gradient onto a low-dimensional random subspace of dimension l at each iteration, similar to coordinate descent but without restricting the directional derivatives to lie along the axes. Without requiring a full gradient, this mapping can be performed by computing l directional derivatives (e.g., via forward-mode automatic differentiation). We give proofs of convergence in expectation under various convexity assumptions, as well as probabilistic convergence results under strong convexity. Our method extends the well-known Gaussian smoothing technique to descent in subspaces of dimension greater than one, opening the door to new analyses of Gaussian smoothing when more than one directional derivative is used at each iteration. We also provide a finite-dimensional variant of a special case of the Johnson-Lindenstrauss lemma. Experimentally, we show that our method compares favorably to coordinate descent, Gaussian smoothing, gradient descent, and BFGS (when gradients are calculated via forward-mode automatic differentiation) on problems from the machine learning and shape optimization literature.
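To make the iteration described in the abstract concrete, the sketch below gives one plausible reading of it in Python/NumPy: at each step a random d-by-l matrix P with orthonormal columns is drawn and scaled by sqrt(d/l) so that the projection is unbiased, l directional derivatives along its columns are approximated by forward finite differences (a stand-in for the forward-mode automatic differentiation mentioned above), and the iterate moves along P P^T grad f(x). The function and parameter names (stochastic_subspace_descent, ell, step, fd_eps), the QR-based subspace sampling, and the toy quadratic are illustrative assumptions, not details taken from the paper; the paper's step-size and sampling choices may differ.

```python
import numpy as np

def stochastic_subspace_descent(f, x0, ell=10, step=1e-2, n_iters=2000,
                                fd_eps=1e-6, rng=None):
    """Minimal sketch of a stochastic subspace descent step (illustrative).

    Assumptions not taken from the paper: the subspace basis is the thin QR
    factor of a Gaussian matrix, scaled by sqrt(d/ell) so the projected
    gradient is unbiased, and directional derivatives are approximated by
    forward finite differences rather than forward-mode automatic
    differentiation.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    d = x.size
    fx = f(x)
    for _ in range(n_iters):
        # Draw a d x ell matrix with orthonormal columns, then rescale so that
        # E[P @ P.T] equals the identity.
        Q, _ = np.linalg.qr(rng.standard_normal((d, ell)))
        P = np.sqrt(d / ell) * Q
        # ell directional derivatives of f along the columns of P
        # (each costs one extra function evaluation).
        dirderiv = np.array([(f(x + fd_eps * P[:, j]) - fx) / fd_eps
                             for j in range(ell)])
        # Move along the projected gradient P P^T grad f(x).
        x = x - step * (P @ dirderiv)
        fx = f(x)
    return x, fx

# Toy usage: a 1000-dimensional quadratic accessed through function values only.
if __name__ == "__main__":
    d = 1000
    diag = np.linspace(0.1, 1.0, d)          # eigenvalues of the Hessian
    f = lambda x: 0.5 * np.sum(diag * x**2)  # f(x) = 0.5 x^T A x, A diagonal
    x_final, f_final = stochastic_subspace_descent(f, np.ones(d), ell=10,
                                                   step=5e-3, n_iters=2000)
    print(f"f(x0) = {f(np.ones(d)):.3e}, f(x_final) = {f_final:.3e}")
```

Each iteration of this sketch uses l+1 function evaluations; in the setting the abstract describes, forward-mode automatic differentiation would supply the l directional derivatives exactly instead of via finite differences.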
Pages: 339-368
Number of pages: 30
Related Papers
50 records in total
  • [1] A stochastic subspace approach to gradient-free optimization in high dimensions
    David Kozak
    Stephen Becker
    Alireza Doostan
    Luis Tenorio
    Computational Optimization and Applications, 2021, 79 : 339 - 368
  • [2] Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization
    Chen, Lesi
    Xu, Jing
    Luo, Luo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [3] A conjecture on global optimization using gradient-free stochastic approximation
    Maryak, JL
    Chin, DC
    JOINT CONFERENCE ON THE SCIENCE AND TECHNOLOGY OF INTELLIGENT SYSTEMS, 1998, : 441 - 445
  • [4] Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
    Lin, Tianyi
    Zheng, Zeyu
    Jordan, Michael I.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [5] An adaptive Bayesian approach to gradient-free global optimization
    Yu, Jianneng
    Morozov, Alexandre V.
    NEW JOURNAL OF PHYSICS, 2024, 26 (02):
  • [6] Distributed gradient-free and projection-free algorithm for stochastic constrained optimization
    Hou J.
    Zeng X.
    Chen C.
    Autonomous Intelligent Systems, 2024, 4 (01):
  • [7] Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
    Ömer Deniz Akyildiz
    Dan Crisan
    Joaquín Míguez
    Statistics and Computing, 2020, 30 : 1645 - 1663
  • [8] Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
    Akyildiz, Omer Deniz
    Crisan, Dan
    Miguez, Joaquin
    STATISTICS AND COMPUTING, 2020, 30 (06) : 1645 - 1663
  • [9] Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization
    Huang, Feihu
    Gu, Bin
    Huo, Zhouyuan
    Chen, Songcan
    Huang, Heng
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 1503 - 1510
  • [10] Mixed Simultaneous Perturbation Stochastic Approximation for Gradient-Free Optimization with Noisy Measurements
    Wang, Long
    Zhu, Jingyi
    Spall, James C.
    2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 2018, : 3774 - 3779