Evaluation of compression index of red mud by machine learning interpretability methods

Cited by: 0
Authors
Yang, Fan [1 ,2 ]
Zhang, Jieya [1 ,2 ]
Xie, Mingxing [1 ,2 ]
Cui, Wenwen [1 ,2 ]
Dong, Xiaoqiang [1 ,2 ]
Affiliations
[1] Taiyuan Univ Technol, Coll Civil Engn, Taiyuan 030024, Peoples R China
[2] Shanxi Key Lab Civil Engn Disaster Prevent & Contr, Taiyuan 030024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Red mud; Compression index; Machine learning models; SHapley Additive exPlanations (SHAP); SUPPORT VECTOR MACHINE; PREDICTION; FORMULATION;
DOI
10.1016/j.compgeo.2025.107130
CLC Number
TP39 [Computer Applications];
Subject Classification Code
081203 ; 0835 ;
Abstract
The annual increase in red mud emissions necessitates the expansion of bauxite residue disposal areas (BRDAs), while escalating land value has led to proposals for development on closed BRDAs. Understanding the compressive properties of red mud is therefore critical for the safe management of and construction on BRDAs. Deriving the compression index (Cc) through consolidation tests to assess compression characteristics is both time-intensive and sensitive to the quality of the sampling methods employed. Consequently, it is essential to develop predictive models for the compression index that rely on more easily measurable physical parameters. This study proposes the use of machine learning (ML) models to predict the Cc of red mud. Several ML models were studied, including Linear Regression (LR), Ridge Regression (RR), Support Vector Regression (SVR), Random Forest (RF), Extremely Randomized Trees (Extra Trees), K-Nearest Neighbors (KNN), Category Boosting (CatBoost), and Light Gradient Boosting Machine (LightGBM). A grid search algorithm was used to obtain the optimal parameters for each ML model, and k-fold cross-validation was employed to enhance generalization performance. Ultimately, the KNN model achieved the best performance. The SHAP method was used to describe the specific influence patterns and to quantify the contribution of each feature to the Cc of red mud. The research indicated that the liquidity index (IL) and natural water content (wn) exerted the most substantial influence on the Cc, yielding a positive effect.
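The modeling pipeline the abstract describes (grid search over hyperparameters with k-fold cross-validation, with KNN emerging as the best model) can be sketched as follows. This is a minimal illustration assuming a scikit-learn workflow; the feature columns (stand-ins for physical parameters such as IL and wn), the synthetic data, and the parameter grid are assumptions for demonstration, not the paper's dataset or settings.

```python
# Sketch: grid-search a KNN regressor with k-fold cross-validation,
# mirroring the workflow described in the abstract. Data are synthetic.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Four illustrative predictors (e.g. liquidity index, water content, ...).
X = rng.uniform(size=(200, 4))
# Hypothetical target: compression index driven mainly by the first two features.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.05, size=200)

# Scale features before KNN, since it is distance-based.
pipe = make_pipeline(StandardScaler(), KNeighborsRegressor())
param_grid = {
    "kneighborsregressor__n_neighbors": [3, 5, 7, 9],
    "kneighborsregressor__weights": ["uniform", "distance"],
}
search = GridSearchCV(
    pipe,
    param_grid,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="r2",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

A SHAP analysis as in the paper would then wrap the fitted predictor (e.g. with a kernel-based explainer, since KNN is not tree-structured) to attribute each prediction to the input features.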
Pages: 12