Robust Optimal Well Control using an Adaptive Multigrid Reinforcement Learning Framework

Cited by: 4
Authors
Dixit, Atish [1 ]
Elsheikh, Ahmed H. [1 ]
Affiliations
[1] Heriot Watt Univ, Edinburgh, Midlothian, Scotland
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Reinforcement learning; Adaptive; Multigrid framework; Transfer learning; Robust optimal control; OPTIMIZATION;
DOI
10.1007/s11004-022-10033-x
Chinese Library Classification
P [Astronomy, Earth Sciences];
Discipline Code
07;
Abstract
Reinforcement learning (RL) is a promising tool for solving robust optimal well control problems, where the model parameters are highly uncertain and the system is partially observable in practice. However, RL of robust control policies often relies on performing a large number of simulations, which can easily become computationally intractable when the individual simulations are computationally intensive. To address this bottleneck, an adaptive multigrid RL framework is introduced, inspired by the principles of geometric multigrid methods used in iterative numerical algorithms. RL control policies are initially learned using computationally efficient low-fidelity simulations with a coarse grid discretization of the underlying partial differential equations (PDEs). Subsequently, the simulation fidelity is increased adaptively towards the highest-fidelity simulation, which corresponds to the finest discretization of the model domain. The proposed framework is demonstrated using a state-of-the-art, model-free, policy-based RL algorithm, namely proximal policy optimization. Results are shown for two case studies of robust optimal well control problems inspired by the SPE-10 model 2 benchmark. Prominent gains in computational efficiency are observed with the proposed framework, which saves around 60-70% of the computational cost of its single fine-grid counterpart.
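The coarse-to-fine training loop described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the fidelity levels, cost figures, and plateau test are invented for illustration, the toy `rollout_return` stands in for a reservoir simulation (with coarse-grid discretization bias), and a finite-difference ascent step stands in for a PPO update; none of it reflects the authors' actual implementation.

```python
import random

random.seed(0)

# Illustrative fidelity ladder, coarsest to finest: (cells per side, cost per rollout).
FIDELITIES = [(8, 1.0), (16, 4.0), (32, 16.0)]

def rollout_return(policy, cells):
    """Toy surrogate for a well-control simulation: the objective peaks at
    policy = 0.7, and coarser grids add a discretization bias to the optimum."""
    bias = 1.0 / cells
    noise = random.gauss(0.0, 0.02)
    return -(policy - (0.7 + bias)) ** 2 + noise

def policy_gradient_step(policy, cells, lr=0.1, eps=0.05):
    """Finite-difference ascent step, standing in for a PPO update."""
    g = (rollout_return(policy + eps, cells)
         - rollout_return(policy - eps, cells)) / (2 * eps)
    return policy + lr * g

def adaptive_multigrid_train(iters_per_level=200, window=20, tol=1e-3):
    """Train on the coarse grid first; promote to the next finer fidelity
    once the rolling average of returns plateaus (the 'adaptive' part)."""
    policy, cost = 0.0, 0.0
    for cells, sim_cost in FIDELITIES:
        history = []
        for _ in range(iters_per_level):
            policy = policy_gradient_step(policy, cells)
            history.append(rollout_return(policy, cells))
            cost += 3 * sim_cost  # two gradient rollouts + one evaluation
            if len(history) >= 2 * window:
                old = sum(history[-2 * window:-window]) / window
                new = sum(history[-window:]) / window
                if abs(new - old) < tol:
                    break  # returns have plateaued; move to the finer grid
    return policy, cost
```

By construction, most gradient steps are taken on the cheap coarse grids, so the total cost stays well below that of running every iteration at the finest fidelity, which is the source of the 60-70% savings the abstract reports for the real framework.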
Pages: 345-375
Page count: 31
Related Papers (50 total)
  • [21] Robust control for affine nonlinear systems under the reinforcement learning framework
    Guo, Wenxin
    Qin, Weiwei
    Lan, Xuguang
    Liu, Jieyu
    Zhang, Zhaoxiang
    [J]. NEUROCOMPUTING, 2024, 587
  • [22] Robust reinforcement learning control
    Kretchmar, RM
    Young, PM
    Anderson, CW
    Hittle, DC
    Anderson, ML
    Tu, J
    Delnero, CC
    [J]. PROCEEDINGS OF THE 2001 AMERICAN CONTROL CONFERENCE, VOLS 1-6, 2001, : 902 - 907
  • [23] Resilient adaptive optimal control of distributed multi-agent systems using reinforcement learning
    Moghadam, Rohollah
    Modares, Hamidreza
    [J]. IET CONTROL THEORY AND APPLICATIONS, 2018, 12 (16): : 2165 - 2174
  • [24] Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers
    Lewis, Frank L.
    Vrabie, Draguna
    Vamvoudakis, Kyriakos G.
    [J]. IEEE CONTROL SYSTEMS MAGAZINE, 2012, 32 (06): : 76 - 105
  • [25] Reinforcement Learning-Based Adaptive Optimal Control for Partially Unknown Systems Using Differentiator
    Guo, Xinxin
    Yan, Weisheng
    Cui, Rongxin
    [J]. 2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 2018, : 1039 - 1044
  • [26] Practical Adaptive Iterative Learning Control Framework Based on Robust Adaptive Approach
    Chen, Weisheng
    Li, Junmin
    Li, Jing
    [J]. ASIAN JOURNAL OF CONTROL, 2011, 13 (01) : 85 - 93
  • [27] Nonlinear Optimal Control Using Deep Reinforcement Learning
    Bucci, Michele Alessandro
    Semeraro, Onofrio
    Allauzen, Alexandre
    Cordier, Laurent
    Mathelin, Lionel
    [J]. IUTAM LAMINAR-TURBULENT TRANSITION, 2022, 38 : 279 - 290
  • [28] Optimal and Autonomous Control Using Reinforcement Learning: A Survey
    Kiumarsi, Bahare
    Vamvoudakis, Kyriakos G.
    Modares, Hamidreza
    Lewis, Frank L.
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (06) : 2042 - 2062
  • [29] Optimal control of ship unloaders using reinforcement learning
    Scardua, LA
    Da Cruz, JJ
    Costa, AHR
    [J]. ADVANCED ENGINEERING INFORMATICS, 2002, 16 (03) : 217 - 227
  • [30] Reinforcement Learning for Adaptive Optimal Stationary Control of Linear Stochastic Systems
    Pang, Bo
    Jiang, Zhong-Ping
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (04) : 2383 - 2390