A deep reinforcement learning (DRL) based approach for well-testing interpretation to evaluate reservoir parameters

Cited: 15
Authors
Dong, Peng [1 ]
Chen, Zhi-Ming [1 ,2 ]
Liao, Xin-Wei [1 ]
Yu, Wei [2 ]
Affiliations
[1] China Univ Petr Beijing CUP, State Key Lab Petr Resources & Prospecting, Beijing 102249, Peoples R China
[2] Univ Texas Austin, Austin, TX 78731 USA
Funding
Beijing Natural Science Foundation;
Keywords
Well testing; Deep reinforcement learning; Automatic interpretation; Parameter evaluation; NEURAL-NETWORK; OPTIMIZATION; MODEL;
DOI
10.1016/j.petsci.2021.09.046
CLC Classification
TE [Petroleum and natural gas industry]; TK [Energy and power engineering];
Discipline Codes
0807; 0820;
Abstract
Parameter inversion in oil/gas reservoirs based on well test interpretation is of great significance in the oil/gas industry. Automatic well test interpretation based on artificial intelligence is the most promising way to address the problem of non-unique solutions. In this work, a new deep reinforcement learning (DRL) based approach is proposed for automatic curve matching in well test interpretation, using the double deep Q-network (DDQN). The DDQN algorithm is applied to train agents for automatic parameter tuning in three conventional well-testing models. In addition, to alleviate the curse of dimensionality in the parameter space, an asynchronous parameter adjustment strategy is used to train the agent. Finally, field applications are carried out using the new DRL approach. Results show that the number of steps required to complete curve matching is the smallest for the DDQN when compared with the naive deep Q-network (naive DQN) and the deep Q-network (DQN). We also show that the DDQN improves the robustness of curve matching in comparison with supervised machine learning algorithms. Using the DDQN algorithm to perform 100 curve matching tests on the three traditional well test models, the mean relative error of the parameters is 7.58% for the homogeneous model, 10.66% for the radial composite model, and 12.79% for the dual porosity model. In the actual field application, a good curve fit is obtained with only 30 parameter-adjustment steps. (c) 2021 The Authors. Publishing services by Elsevier B.V. on behalf of KeAi Communications Co. Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
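The abstract describes tuning well-test model parameters with a double deep Q-network, whose defining trait is that the online network selects the next action while the target network evaluates it. The sketch below shows only this generic DDQN target rule, not the authors' curve-matching implementation; the function name `ddqn_targets` and the array shapes are illustrative assumptions.

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, gamma=0.99, dones=None):
    """Double DQN targets for a batch of transitions.

    rewards:       shape (B,)   immediate rewards
    next_q_online: shape (B, A) online-network Q-values at the next state
    next_q_target: shape (B, A) target-network Q-values at the next state
    """
    # Online network picks the greedy next action (selection)...
    best_actions = np.argmax(next_q_online, axis=1)
    # ...target network scores that action (evaluation), which
    # reduces the overestimation bias of plain DQN.
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    if dones is None:
        dones = np.zeros_like(rewards)
    return rewards + gamma * (1.0 - dones) * evaluated
```

With reward 1.0, discount 0.5, and a next state where the online network prefers action 1 (scored 2.0 by the target network), the target is 1.0 + 0.5 * 2.0 = 2.0.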
Pages: 264-278
Page count: 15
Related Papers
50 records in total
  • [1] A deep reinforcement learning(DRL) based approach for well-testing interpretation to evaluate reservoir parameters
    Peng Dong
    Zhi-Ming Chen
    Xin-Wei Liao
    Wei Yu
    Petroleum Science, 2022, 19 (01) : 264 - 278
  • [2] A Deep Reinforcement Learning (DRL) Based Approach to SFC Request Scheduling in Computer Networks
    Nagireddy, Eesha
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (08) : 1062 - 1065
  • [3] Numerical Well-testing Model of Fractured-well in Low Permeability Reservoir Based on Mutative Permeability Effect
    Zhang Yuchen
    Zhou Chuning
    Cui Jingwen
    ADVANCES IN MECHATRONICS AND CONTROL ENGINEERING II, PTS 1-3, 2013, 433-435 : 1984 - 1987
  • [4] A Search-Based Testing Approach for Deep Reinforcement Learning Agents
    Zolfagharian, Amirhossein
    Abdellatif, Manel
    Briand, Lionel C.
    Bagherzadeh, Mojtaba
    Ramesh, S.
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2023, 49 (07) : 3715 - 3735
  • [5] A Deep Reinforcement Learning-Based Approach for Android GUI Testing
    Gao, Yuemeng
    Tao, Chuanqi
    Guo, Hongjing
    Gao, Jerry
    WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423 : 262 - 276
  • [6] DRL-SRS: A Deep Reinforcement Learning Approach for Optimizing Spaced Repetition Scheduling
    Xiao, Qinfeng
    Wang, Jing
    APPLIED SCIENCES-BASEL, 2024, 14 (13):
  • [7] DRL-Tomo: a deep reinforcement learning-based approach to augmented data generation for network tomography
    Hou, Changsheng
    Hou, Bingnan
    Li, Xionglve
    Zhou, Tongqing
    Chen, Yingwen
    Cai, Zhiping
    COMPUTER JOURNAL, 2024: 2995 - 3008
  • [8] DAR-DRL: A dynamic adaptive routing method based on deep reinforcement learning
    Rao, Zheheng
    Xu, Yanyan
    Yao, Ye
    Meng, Weizhi
    Computer Communications, 2024, 228
  • [9] Adversarial Attacks and Defense in Deep Reinforcement Learning (DRL)-Based Traffic Signal Controllers
    Haydari, Ammar
    Zhang, Michael
    Chuah, Chen-Nee
    IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 2 : 402 - 416
  • [10] ASPW-DRL: assembly sequence planning for workpieces via a deep reinforcement learning approach
    Zhao, Minghui
    Guo, Xian
    Zhang, Xuebo
    Fang, Yongchun
    Ou, Yongsheng
    ASSEMBLY AUTOMATION, 2020, 40 (01) : 65 - 75