REINFORCEMENT LEARNING FROM PIXELS: WATERFLOODING OPTIMIZATION

Cited by: 0
Authors
Miftakhov, Ruslan [1]
Efremov, Igor [1]
Al-Qasim, Abdulaziz S. [2]
Affiliations
[1] GridPoint Dynamics, Moscow, Russia
[2] Saudi Aramco, Dhahran, Eastern Province, Saudi Arabia
Keywords
Reinforcement Learning; Optimization; Reservoir Simulation; Waterflooding; PERFORMANCE; FIELD; FLOW
DOI
Not available
Chinese Library Classification (CLC)
U6 [Waterway Transportation]; P75 [Marine Engineering]
Subject Classification Codes
0814; 081505; 0824; 082401
Abstract
The application of Artificial Intelligence (AI) methods in the petroleum industry has gained traction in recent years. In this paper, Deep Reinforcement Learning (RL) is used to maximize the Net Present Value (NPV) of waterflooding by changing the water injection rate. This research is a first step towards showing that using pixel information for reinforcement learning provides many advantages, such as a fundamental understanding of reservoir physics through controlling changes in pressure and saturation, without directly accounting for the reservoir petrophysical properties and wells. The optimization routine based on RL from pixel data is tested on a 2D model, a vertical section of the SPE 10 model. It is shown that RL can optimize waterflooding in a 2D compressible reservoir with two-phase (oil-water) flow. The proposed optimization method is an iterative process. In the first few thousand updates, NPV remains at the baseline level, since learning from raw pixel data takes longer to converge than using classical well production/injection rate information. RL optimization improved NPV by 15 percent, and the optimum scenario shows lower watercut values and more stable production than the baseline optimization. Additionally, we evaluated the impact of selecting different action sets for optimization and examined two cases in which the water injection well can change injection pressure in steps of 200 psi and 600 psi. The results show that in the second case, RL optimization exploits a limitation of the reservoir simulation engine and tries to imitate a cyclic injection regime, which results in a 7% higher NPV than in the first case.
Pages: 9
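
The abstract describes a pixel-based control loop: the agent observes the pressure and saturation fields as images, changes the injector pressure in discrete steps (200 psi or 600 psi), and is rewarded by the NPV it accumulates. The sketch below is a minimal illustration of that interface only, not the authors' code: the class name WaterfloodEnv, the grid size, prices, discount rate, 30-day control step, and the toy reservoir update standing in for a real two-phase simulator are all assumptions introduced here.

import numpy as np

# Illustrative constants (assumed, not taken from the paper).
OIL_PRICE = 60.0        # $/bbl of produced oil
WATER_INJ_COST = 5.0    # $/bbl of injected water
WATER_PROD_COST = 3.0   # $/bbl of produced water
DISCOUNT = 0.10         # annual discount rate


class WaterfloodEnv:
    """Hypothetical pixel-observation environment: the state is a stack of
    pressure and water-saturation maps, the action is a discrete change of the
    injector's bottom-hole pressure, and the reward is the incremental discounted NPV."""

    def __init__(self, nx=60, nz=40, dp_step=200.0):
        self.nx, self.nz = nx, nz
        self.actions = (-dp_step, 0.0, +dp_step)       # psi change per control step
        self.reset()

    def reset(self):
        self.p = np.full((self.nz, self.nx), 3000.0)   # pressure map, psi
        self.sw = np.full((self.nz, self.nx), 0.2)     # water-saturation map
        self.p_inj = 3500.0                            # injector BHP, psi
        self.t = 0                                     # elapsed days
        return self._obs()

    def _obs(self):
        # The "pixels": normalized pressure and saturation stacked as channels.
        return np.stack([self.p / 5000.0, self.sw], axis=0).astype(np.float32)

    def step(self, action_idx):
        self.p_inj = float(np.clip(self.p_inj + self.actions[action_idx], 2500.0, 5500.0))

        # --- placeholder for one control step of a two-phase reservoir simulator ---
        # A real implementation would run the simulator and read back produced
        # oil/water and injected water volumes; the update below is a toy stand-in.
        self.p += 0.02 * (self.p_inj - self.p)                    # crude pressure relaxation
        inj_rate = 0.01 * (self.p_inj - self.p.mean())            # bbl/day, toy scaling
        self.sw = np.clip(self.sw + 1e-4 * inj_rate, 0.2, 0.8)    # advancing water front
        watercut = (self.sw.mean() - 0.2) / 0.6
        q_oil = inj_rate * (1.0 - watercut)
        q_wat = inj_rate * watercut
        # ---------------------------------------------------------------------------

        # Reward: cash flow of this 30-day step, discounted to time zero (NPV increment).
        cash = 30.0 * (OIL_PRICE * q_oil - WATER_PROD_COST * q_wat - WATER_INJ_COST * inj_rate)
        reward = cash / (1.0 + DISCOUNT) ** (self.t / 365.0)
        self.t += 30
        done = self.t >= 3600                                     # ten-year horizon, assumed
        return self._obs(), reward, done, {}


if __name__ == "__main__":
    env = WaterfloodEnv(dp_step=200.0)   # compare with dp_step=600.0 for the coarser action set
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(np.random.randint(3))    # random policy as a crude baseline
        total += reward
    print(f"Random-policy cumulative NPV proxy: {total:,.0f} USD")

A convolutional policy trained with any standard deep RL algorithm could consume _obs() directly; replacing the placeholder block with calls to an actual reservoir simulator is the only structural change needed to move from this sketch toward the setup the paper studies.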