Stable clustering of offshore downhole data using a combined k-means and Gaussian mixture modelling approach

Authors
Amrita Singh
Maheswar Ojha
Affiliations
[1] CSIR-National Geophysical Research Institute
Keywords
Unsupervised learning; Gaussian mixture model; Lithology; Andaman
Abstract
We use unsupervised machine learning techniques, aided by the Gaussian mixture model (GMM), to cluster downhole data from a gas hydrate reservoir in the Andaman Sea, where drilling and coring were carried out at Site 17 in 2006 during the first expedition of the Indian National Gas Hydrate Programme (NGHP-01). Six logs (density, neutron porosity, gamma ray, resistivity, and P- and S-wave velocities) are used in this study. We determine an optimal number of six clusters using the Davies-Bouldin index, the Calinski-Harabasz index, the Dunn index, a dendrogram and a self-organizing map, and verify this choice with high silhouette values. The data are then clustered using k-means, principal component analysis (PCA) and GMM. We notice that k-means with random initialization becomes biased towards the dominant principal component (gamma ray), whereas PCA assigns each log an optimal weight. Based on a statistical analysis of 100 runs, GMM initialized with k-means provides better results than GMM with random initialization; however, it yields three possible configurations of the six clusters, which become stable when a combination of the six logs is used as an additional input. The six clusters are interpreted in terms of lithology through histogram analysis of the corresponding log values. The lithology is found to be clay-dominated sediment with minor silt and sand, together with scattered volcanic ash, carbonate ooze and pyrite, consistent with the lithology determined from smear-slide and sieve data. Except at a few depths with higher concentrations (20–50%) in volcanic glass and carbonate ooze, gas hydrate occupies about 10% of the pore space in silty-clay sediments with sand and volcanic ash.
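The workflow summarized in the abstract (cluster-number selection with validity indices, PCA inspection of the logs, and GMM initialized from k-means) can be sketched with scikit-learn as below. This is a minimal illustration, not the authors' code: the file name site17_logs.csv, the column names, and the standard-scaling step are assumptions made for the example.

```python
# Minimal sketch, assuming a CSV of Site NGHP-01-17 logs with the six columns named below.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

logs = pd.read_csv("site17_logs.csv")  # hypothetical file of the six downhole logs
cols = ["density", "neutron_porosity", "gamma_ray", "resistivity", "vp", "vs"]
X = StandardScaler().fit_transform(logs[cols].values)

# 1) Screen candidate cluster counts with internal validity indices.
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k,
          silhouette_score(X, labels),          # higher is better
          davies_bouldin_score(X, labels),      # lower is better
          calinski_harabasz_score(X, labels))   # higher is better

# 2) PCA to inspect how much each log contributes to the principal components.
pca = PCA().fit(X)
print(pca.explained_variance_ratio_)
print(pca.components_)                          # loadings of the six logs

# 3) GMM with six components, initialized from the k-means centroids.
k = 6
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
gmm = GaussianMixture(n_components=k, means_init=km.cluster_centers_,
                      covariance_type="full", random_state=0).fit(X)
clusters = gmm.predict(X)
```

Repeating step 3 over many random seeds and comparing the resulting label assignments is one way to probe the stability discussed in the abstract, which reports a statistical analysis over 100 runs.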