Missing well-log reconstruction using a sequence self-attention deep-learning framework

Cited: 0
|
Authors
Lin, Lei [1 ]
Wei, Hao [1 ]
Wu, Tiantian [1 ]
Zhang, Pengyun [2 ]
Zhong, Zhi [1 ]
Li, Chenglong [1 ]
Affiliations
[1] China Univ Geosci, Key Lab Theory & Technol Petr Explorat & Dev Hubei, Wuhan, Peoples R China
[2] China Oilfield Serv Ltd, Well Tech R&D Inst, Beijing, Peoples R China
Keywords
NEURAL-NETWORK; OIL-FIELD; PREDICTION; LITHOFACIES; POROSITY; FLUID; BASIN;
DOI
10.1190/GEO2022-0757.1
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Code
0708 ; 070902 ;
Abstract
Well logging is a critical tool for reservoir evaluation and fluid identification. However, due to borehole conditions, instrument failure, economic constraints, etc., some types of well logs are occasionally missing or unreliable. Existing logging-curve reconstruction methods based on empirical formulas and fully connected deep neural networks (FCDNN) can consider only point-to-point mapping relationships. Recurrently structured neural networks can capture multipoint correlations but are difficult to parallelize. To account for the correlation between log sequences while achieving computational parallelism, we develop a novel deep-learning framework for missing well-log reconstruction based on the state-of-the-art transformer architecture. The missing well-log transformer (MWLT) uses a self-attention mechanism instead of a recurrent structure to model the global dependencies between inputs and outputs. To meet different usage requirements, we design the MWLT at three scales: small, base, and large, by adjusting the parameters of the network. A total of 8609 samples from 209 wells in the Sichuan Basin, China, are used for training and validation, and two additional blind wells are used for testing. A data-augmentation strategy with random starting points is implemented to increase the robustness of the model. The results show that our proposed MWLT achieves a significant improvement in accuracy over the conventional Gardner's equation and data-driven approaches such as FCDNN and bidirectional long short-term memory, on the validation data set and blind test wells. MWLT-large and MWLT-base have lower prediction errors than MWLT-small but require more training time. Two wells in the Songliao Basin, China, are used to evaluate the cross-regional generalization performance of our method. The generalizability test results demonstrate that density logs reconstructed by the MWLT remain the best match to the observed data compared with other methods.
The parallelizable MWLT automatically learns the global dependence of the parameters of the subsurface reservoir, enabling an efficient missing well-log reconstruction performance.
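The key contrast the abstract draws, self-attention mixing all depth samples at once rather than a recurrent step-by-step loop, can be sketched with a minimal single-head scaled dot-product attention over a log sequence. This is an illustrative sketch only: the sequence length, model width, and random weights below are hypothetical placeholders, not the MWLT's actual configuration.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a depth sequence.

    x : (seq_len, d_model) array, one embedded log sequence.
    Each output sample is a weighted mix of ALL depths, so the mapping is
    multipoint (unlike point-to-point FCDNN) yet computed in one batch of
    matrix products (unlike a recurrent network's sequential loop).
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])       # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over depths
    return weights @ v                              # (seq_len, d_model)

rng = np.random.default_rng(0)
seq_len, d_model = 64, 16  # hypothetical sizes, not from the paper
x = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (0.1 * rng.standard_normal((d_model, d_model)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (64, 16)
```

Because every row of the attention matrix is formed simultaneously, the whole sequence is processed in parallel, which is the computational advantage over recurrent architectures that the abstract highlights.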
Pages: D391 - D410
Page count: 20
Related Papers
50 records in total
  • [21] SAR Image Reconstruction Method for Target Detection Using Self-Attention CNN-Based Deep Prior Learning
    Li, Min
    Huo, Weibo
    Wu, Junjie
    Yang, Jianyu
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [22] SADeepery: a deep learning framework for protein crystallization propensity prediction using self-attention and auto-encoder networks
    Wang, Shaokai
    Zhao, Haochen
    BRIEFINGS IN BIOINFORMATICS, 2022, 23 (05)
  • [23] A revised biostratigraphic and well-log sequence-stratigraphic framework for the Scotian Margin, offshore eastern Canada
    Weston, Janice F.
    MacRae, R. Andrew
    Ascoli, Piero
    Cooper, M. Kevin E.
    Fensome, Robert A.
    Shaw, David
    Williams, Graham L.
    CANADIAN JOURNAL OF EARTH SCIENCES, 2012, 49 (12) : 1417 - 1462
  • [24] Enhanced Colorectal Cancer Detection and Localization using Self-Attention Mechanisms in Deep Learning
    Gurumoorthi, T.
    Logesh, P.
    Ismail, N. Mohamed
    Malathi, K.
    2ND INTERNATIONAL CONFERENCE ON SUSTAINABLE COMPUTING AND SMART SYSTEMS, ICSCSS 2024, 2024, : 1589 - 1594
  • [25] BAMS: Binary Sequence-Augmented Spectrogram with Self-Attention Deep Learning for Human Activity Recognition
    Sricom, Natchaya
    Charakorn, Rujikorn
    Manoonpong, Poramate
    Limpiti, Tulaya
    2024 IEEE 20TH INTERNATIONAL CONFERENCE ON BODY SENSOR NETWORKS, BSN, 2024,
  • [26] SatCoBiLSTM: Self-attention based hybrid deep learning framework for crisis event detection in social media
    Upadhyay, Abhishek
    Meena, Yogesh Kumar
    Chauhan, Ganpat Singh
EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249
  • [27] SADeepSense: Self-Attention Deep Learning Framework for Heterogeneous On-Device Sensors in Internet of Things Applications
    Yao, Shuochao
    Zhao, Yiran
    Shao, Huajie
    Liu, Dongxin
    Liu, Shengzhong
    Hao, Yifan
    Piao, Ailing
    Hu, Shaohan
    Lu, Su
    Abdelzaher, Tarek F.
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2019), 2019, : 1243 - 1251
  • [29] Synthetic Slowness Shear Well-Log Prediction Using Supervised Machine Learning Models
    Tamoto, Hugo
    Contreras, Rodrigo Colnago
    dos Santos, Francisco Lledo
    Viana, Monique Simplicio
    Gioria, Rafael dos Santos
    Carneiro, Cleyton de Carvalho
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2022, PT I, 2023, 13588 : 115 - 130
  • [30] Depression Detection Based on Hybrid Deep Learning SSCL Framework Using Self-Attention Mechanism: An Application to Social Networking Data
    Nadeem, Aleena
    Naveed, Muhammad
    Satti, Muhammad Islam
    Afzal, Hammad
    Ahmad, Tanveer
    Kim, Ki-Il
    SENSORS, 2022, 22 (24)