Missing well-log reconstruction using a sequence self-attention deep-learning framework

Cited by: 0
Authors
Lin, Lei [1 ]
Wei, Hao [1 ]
Wu, Tiantian [1 ]
Zhang, Pengyun [2 ]
Zhong, Zhi [1 ]
Li, Chenglong [1 ]
Affiliations
[1] China Univ Geosci, Key Lab Theory & Technol Petr Explorat & Dev Hubei, Wuhan, Peoples R China
[2] China Oilfield Serv Ltd, Well Tech R&D Inst, Beijing, Peoples R China
Keywords
NEURAL-NETWORK; OIL-FIELD; PREDICTION; LITHOFACIES; POROSITY; FLUID; BASIN
DOI
10.1190/GEO2022-0757.1
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline codes
0708; 070902
Abstract
Well logging is a critical tool for reservoir evaluation and fluid identification. However, due to borehole conditions, instrument failure, economic constraints, and similar factors, some types of well logs are occasionally missing or unreliable. Existing logging-curve reconstruction methods based on empirical formulas and fully connected deep neural networks (FCDNNs) can consider only point-to-point mapping relationships. Recurrently structured neural networks can capture multipoint correlations but are difficult to parallelize. To account for the correlation between log sequences while achieving computational parallelism, we develop a novel deep-learning framework for missing well-log reconstruction based on the state-of-the-art transformer architecture. The missing well-log transformer (MWLT) uses a self-attention mechanism instead of a recurrent structure to model the global dependencies of the inputs and outputs. To meet different usage requirements, we design the MWLT at three scales, small, base, and large, by adjusting the parameters of the network. A total of 8609 samples from 209 wells in the Sichuan Basin, China, are used for training and validation, and two additional blind wells are used for testing. A data augmentation strategy with random starting points is implemented to increase the robustness of the model. The results show that the proposed MWLT achieves a significant improvement in accuracy over the conventional Gardner's equation and over data-driven approaches such as the FCDNN and bidirectional long short-term memory, on both the validation data set and the blind test wells. MWLT-large and MWLT-base have lower prediction errors than MWLT-small but require more training time. Two wells in the Songliao Basin, China, are used to evaluate the cross-regional generalization performance of our method. The generalizability test results demonstrate that density logs reconstructed by the MWLT remain the best match to the observed data among the compared methods. The parallelizable MWLT automatically learns the global dependence of subsurface reservoir parameters, enabling efficient missing well-log reconstruction.
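For context on the conventional baseline named in the abstract: Gardner's equation is the classical point-to-point empirical mapping from P-wave velocity to bulk density. The fit shown below uses the textbook default coefficients; in practice they are re-calibrated per lithology and basin, and the values used in this paper are not stated here.

```latex
% Gardner's relation (Gardner et al., 1974): density from P-wave velocity.
% 0.31 and 0.25 are the common default coefficients, not this paper's fit.
\rho = \alpha V_p^{\beta} \approx 0.31\, V_p^{0.25}
\qquad (V_p \ \text{in m/s},\ \rho \ \text{in g/cm}^3)
```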
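The core architectural idea in the abstract is to replace recurrence with self-attention so that all depth samples are processed in one parallel pass. The following is a minimal PyTorch sketch of such a sequence-to-sequence log-reconstruction model; it is not the authors' MWLT implementation, and every layer size, window length, and log name below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class MWLTSketch(nn.Module):
    """Hypothetical transformer-encoder regressor: observed logs in,
    one reconstructed log (e.g., density) out, per depth sample."""

    def __init__(self, n_input_logs=3, d_model=64, n_heads=4,
                 n_layers=2, dim_ff=256, max_len=512):
        super().__init__()
        # Project the per-depth log measurements into the model dimension.
        self.embed = nn.Linear(n_input_logs, d_model)
        # Learned positional encoding so attention is aware of depth order.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=dim_ff, batch_first=True)
        # Self-attention relates every depth sample to every other one in
        # a single parallel pass, unlike a step-by-step recurrent network.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Regress one reconstructed log value per depth sample.
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):
        # x: (batch, depth_samples, n_input_logs)
        h = self.embed(x) + self.pos[:, :x.size(1), :]
        h = self.encoder(h)
        return self.head(h).squeeze(-1)   # (batch, depth_samples)

# Usage: predict a density log from three observed logs (e.g., gamma ray,
# sonic, neutron) over a 256-sample depth window.
model = MWLTSketch()
inputs = torch.randn(8, 256, 3)    # batch of 8 windows
density = model(inputs)            # shape (8, 256)
```

Scaling `d_model`, `n_heads`, and `n_layers` up or down is the kind of parameter adjustment that would distinguish small, base, and large variants of such a model.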
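The random-starting-point data augmentation mentioned in the abstract can be illustrated as follows; this sketch assumes training windows are re-sliced at a random depth offset each time a sample is drawn, and the function name and window length are hypothetical.

```python
import numpy as np

def random_start_window(curves, window=256, rng=None):
    """Slice one training window from full-length log curves
    (array of shape (depth_samples, n_logs)) at a random start depth,
    so successive draws see differently aligned depth contexts."""
    rng = rng or np.random.default_rng()
    start = rng.integers(0, curves.shape[0] - window + 1)
    return curves[start:start + window]

# Usage on a hypothetical 5000-sample well with 4 recorded logs.
well = np.random.rand(5000, 4)
patch = random_start_window(well)   # shape (256, 4)
```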
Pages: D391 - D410
Page count: 20
Related papers
50 in total
  • [41] Convergence of Deep Learning and Forensic Methodologies Using Self-attention Integrated EfficientNet Model for Deep Fake Detection
    Rimjhim Padam Singh; Nichenametla Hima Sree; Koti Leela Sai Praneeth Reddy; Kandukuri Jashwanth
    SN COMPUTER SCIENCE, 5 (8)
  • [42] Well-log decomposition using variational mode decomposition in assisting the sequence stratigraphy analysis of a conglomerate reservoir
    Xu, Zhaohui; Zhang, Bo; Li, Fangyu; Cao, Gang; Lin, Yuming
    GEOPHYSICS, 2018, 83 (04): B221-B228
  • [43] Self-attention transformer unit-based deep learning framework for skin lesions classification in smart healthcare
    Rezaee, Khosro; Zadeh, Hossein Ghayoumi
    DISCOVER APPLIED SCIENCES, 2024, 6 (01)
  • [44] A deep-learning based solar irradiance forecast using missing data
    Shan, Shuo; Xie, Xiangying; Fan, Tao; Xiao, Yushun; Ding, Zhetong; Zhang, Kanjian; Wei, Haikun
    IET RENEWABLE POWER GENERATION, 2022, 16 (07): 1462-1473
  • [45] UIESC: An Underwater Image Enhancement Framework via Self-Attention and Contrastive Learning
    Chen, Renzhang; Cai, Zhanchuan; Yuan, Jieyu
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (12): 11701-11711
  • [46] Image Super-Resolution Reconstruction Method Based on Self-Attention Deep Network
    Chen Zihan; Wu Haobo; Pei Haodong; Chen Rong; Hu Jiaxin; Shi Hengtong
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (04)
  • [47] Self-attention enabled deep learning of dihydrouridine (D) modification on mRNAs unveiled a distinct sequence signature from tRNAs
    Wang, Yue; Wang, Xuan; Cui, Xiaodong; Meng, Jia; Rong, Rong
    MOLECULAR THERAPY-NUCLEIC ACIDS, 2023, 31: 411-420
  • [48] Full-field prediction of stress and fracture patterns in composites using deep learning and self-attention
    Chen, Yang; Dodwell, Tim; Chuaqui, Tomas; Butler, Richard
    ENGINEERING FRACTURE MECHANICS, 2023, 286
  • [49] Compensating for Partial Doppler Velocity Log Outages by Using Deep-Learning Approaches
    Yona, Mor; Klein, Itzik
    2021 IEEE INTERNATIONAL SYMPOSIUM ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2021), 2021
  • [50] Kernel Self-Attention for Weakly-supervised Image Classification using Deep Multiple Instance Learning
    Rymarczyk, Dawid; Borowa, Adriana; Tabor, Jacek; Zielinski, Bartosz
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021: 1720-1729