A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients

Cited: 2
Authors
Eltahan, Esmail [1 ]
Alpak, Faruk Omer [2 ]
Sepehrnoori, Kamy [1 ]
Affiliations
[1] The University of Texas at Austin, Austin, TX 78712, USA
[2] Shell International Exploration & Production Inc., Austin, TX, USA
Keywords
ENSEMBLE-BASED OPTIMIZATION; ALTERNATING-GAS-INJECTION; LIFE-CYCLE OPTIMIZATION; WELL PLACEMENT; RESERVOIR; MINIMIZATION; ALGORITHMS; MANAGEMENT
DOI
10.1007/s10596-023-10218-1
CLC number
TP39 [Computer Applications]
Subject classification
081203; 0835
Abstract
The goal of field-development optimization is to maximize the expected value of an objective function, e.g., the net present value of a producing oil field or the amount of CO2 stored in a subsurface formation, over an ensemble of models that describes the uncertainty range. A single evaluation of the objective function requires solving a system of partial differential equations, which can be computationally costly. Hence, it is highly desirable for an optimization algorithm to reduce the number of objective-function evaluations while delivering a high convergence rate. Here, we develop a quasi-Newton method that builds on approximate evaluations of objective-function gradients and takes more effective iterative steps by using a trust-region approach rather than a line search. We implement three gradient formulations: the ensemble-optimization (EnOpt) gradient and two variants of the stochastic simplex approximate gradient (StoSAG), all computed from perturbations around the point of interest. We modify the formulations to exploit the structure of the objective function. Instead of returning a single gradient, the reformulation breaks the objective function into its sub-components and returns a set of sub-gradients. Prior problem-specific knowledge can then be incorporated by passing a 'weight' matrix that acts on the sub-gradients. Two quasi-Newton updating algorithms are implemented: Broyden-Fletcher-Goldfarb-Shanno (BFGS) and symmetric rank-one (SR1). We first evaluate the variants of our method on challenging test functions (e.g., stochastic variants of the Rosenbrock and Chebyquad functions). We then present an application to a well-control optimization problem for a realistic synthetic case. Our results confirm that StoSAG gradients are significantly more effective than EnOpt gradients at accelerating convergence. An important challenge with stochastic gradients is determining an adequate number of perturbations a priori. We find that the optimal number of perturbations depends on both the number of decision variables and the size of the uncertainty ensemble, and we provide practical guidelines for its selection. We show on the test functions that imposing prior knowledge of the problem structure can improve gradient quality and significantly accelerate convergence. In many instances, the quasi-Newton algorithms deliver superior performance compared to the steepest-descent algorithm, especially during the early iterations. Given the computational cost involved in typical applications, rapid and noteworthy improvements at early iterations are highly desirable for accelerated project delivery. Furthermore, our method is robust, exploits parallel processing, and can be readily applied in a generic fashion to a variety of problems where the true gradient is difficult to compute or simply unavailable.
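To make the ingredients named in the abstract concrete, the sketch below combines a StoSAG-style simplex gradient estimate over a model ensemble with a BFGS Hessian update and a dogleg trust-region step. It is a minimal illustration under stated assumptions, not the authors' implementation: the function names (stosag_gradient, bfgs_update, dogleg_step), the Gaussian perturbation scheme with a single scale sigma, and the dogleg subproblem solver are illustrative choices, and the paper's EnOpt variant, sub-gradient weighting, and SR1 update are omitted.

```python
import numpy as np

def stosag_gradient(J, u, models, n_pert=10, sigma=0.1, rng=None):
    """StoSAG-style gradient estimate (illustrative sketch, written for
    minimization; pass the negated objective to maximize, e.g., NPV).

    J      -- callable J(u, m) -> float: objective for controls u under model m
    u      -- (n,) current control vector
    models -- list of ensemble members representing the uncertainty range
    """
    rng = np.random.default_rng(rng)
    n = u.size
    dU = sigma * rng.standard_normal((n, n_pert))   # columns are perturbations
    g = np.zeros(n)
    for m in models:
        J0 = J(u, m)
        dJ = np.array([J(u + dU[:, k], m) - J0 for k in range(n_pert)])
        # Per-realization least-squares simplex gradient: solve dU^T g_m ~ dJ
        g_m, *_ = np.linalg.lstsq(dU.T, dJ, rcond=None)
        g += g_m
    return g / len(models)                          # average over the ensemble

def bfgs_update(B, s, y):
    """BFGS update of the Hessian approximation B, with s = u_new - u_old and
    y = g_new - g_old; skipped when curvature s^T y is not sufficiently
    positive, which keeps B positive definite."""
    sy = s @ y
    if sy <= 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

def dogleg_step(g, B, delta):
    """Approximate minimizer of g^T p + 0.5 p^T B p subject to ||p|| <= delta,
    assuming B is positive definite (dogleg path)."""
    p_newton = -np.linalg.solve(B, g)
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                             # full step fits in region
    p_cauchy = -(g @ g) / (g @ B @ g) * g           # steepest-descent minimizer
    if np.linalg.norm(p_cauchy) >= delta:
        return delta * p_cauchy / np.linalg.norm(p_cauchy)
    # Walk from the Cauchy point toward the Newton point to the boundary:
    # solve ||p_cauchy + tau * d||^2 = delta^2 for tau in (0, 1)
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d
```

In a full trust-region loop, the step from dogleg_step would be accepted or rejected based on the ratio of actual to predicted objective change, with the radius delta expanded or shrunk accordingly. Note that the len(models) * n_pert objective evaluations inside stosag_gradient are mutually independent, which is what makes this family of methods straightforward to parallelize.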
Pages: 627-648
Page count: 22
Related papers
50 records in total
  • [1] A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients
    Eltahan, Esmail
    Alpak, Faruk Omer
    Sepehrnoori, Kamy
    [J]. Computational Geosciences, 2023, 27: 627-648
  • [2] A quasi-Newton trust-region method
    Gertz, E. Michael
    [J]. Mathematical Programming, 2004, 100(3): 447-470
  • [3] An LDL^T trust-region quasi-Newton method
    Brust, Johannes J.
    Gill, Philip E.
    [J]. SIAM Journal on Scientific Computing, 2024, 46(5)
  • [4] A proximal quasi-Newton trust-region method for nonsmooth regularized optimization
    Aravkin, Aleksandr Y.
    Baraldi, Robert
    Orban, Dominique
    [J]. SIAM Journal on Optimization, 2022, 32(2): 900-929
  • [5] A nonmonotone quasi-Newton trust-region method of conic model for unconstrained optimization
    Qu, Shao-Jian
    Zhang, Qing-Pu
    Jiang, Su-Da
    [J]. Optimization Methods & Software, 2009, 24(3): 339-367
  • [6] A limited memory quasi-Newton trust-region method for box constrained optimization
    Rahpeymaii, Farzad
    Kimiaei, Morteza
    Bagheri, Alireza
    [J]. Journal of Computational and Applied Mathematics, 2016, 303: 105-118
  • [7] Deep neural networks training by stochastic quasi-Newton trust-region methods
    Yousefi, Mahsa
    Martinez, Angeles
    [J]. Algorithms, 2023, 16(10)
  • [8] Quasi-Newton trust region policy optimization
    Jha, Devesh K.
    Raghunathan, Arvind U.
    Romeres, Diego
    [J]. Conference on Robot Learning, 2019, Vol. 100
  • [9] Stochastic quasi-Newton method for nonconvex stochastic optimization
    Wang, Xiao
    Ma, Shiqian
    Goldfarb, Donald
    Liu, Wei
    [J]. SIAM Journal on Optimization, 2017, 27(02): 927-956