An interpretable model for bridge scour risk assessment using explainable artificial intelligence and engineers' expertise

Cited: 0
Authors
Wang, Tianyu [1 ,2 ]
Reiffsteck, Philippe [2 ]
Chevalier, Christophe [2 ]
Chen, Chi-Wei [1 ]
Schmidt, Franziska [3 ]
Affiliations
[1] SNCF Reseau, Dept Ouvrages Art, EMF DET, DTR, DGII, F-93212 La Plaine St Denis, France
[2] Univ Gustave Eiffel, GERS SRO, Marne La Vallee, France
[3] Univ Gustave Eiffel, MAST EMGCU, Marne La Vallee, France
Keywords
Engineering judgment; explainable artificial intelligence; machine learning; scour risk assessment; surrogate models; SMOTE;
DOI
10.1080/15732479.2023.2230564
CLC number
TU [Building Science];
Subject classification code
0813;
Abstract
A machine learning (ML) model is difficult to apply in practice if it is unclear how its predictions are made. To address this issue, this paper uses explainable artificial intelligence (XAI) and engineers' expertise to interpret an ML model for bridge scour risk prediction. An extreme gradient boosting (XGBoost) model was first trained on data from the French National Railway Company (SNCF). XAI approaches were then employed to obtain global and local explanations, as well as explicit expressions, for interpreting the model. In parallel, a group of SNCF engineers was asked to rank the input parameters according to their engineering judgment. Finally, the feature importance obtained from the XAI approaches was compared with that obtained from the engineers' survey. For both the XAI and the engineers' interpretations, the observation of local scour around the bridge foundation was found to be the most important feature for decision-making. The differences between the XAI interpretations and human expertise highlight the importance of knowledge in hydrology and hydromorphology for scour risk assessment, since engineers currently base their decisions primarily on observed damage (e.g., scour holes, cracks). These results help make the ML model trustworthy by clarifying how its predictions are made, and they provide guidance for improving current inspection procedures.
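The abstract describes a pipeline combining SMOTE resampling, an XGBoost classifier, and XAI-based global and local explanations. The following is a minimal, hypothetical Python sketch of such a pipeline; it assumes scikit-learn, imbalanced-learn, xgboost, and SHAP as the XAI library, and it uses synthetic data with made-up feature names rather than the paper's SNCF inspection records or exact model settings.

# Hypothetical sketch of the pipeline outlined in the abstract: SMOTE
# oversampling, XGBoost training, and SHAP-based global/local explanations.
# Feature names, data, and hyperparameters are placeholders, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
import shap

# Placeholder names standing in for the inspection parameters
feature_names = [
    "local_scour_observed", "foundation_type", "pier_shape", "river_slope",
    "bed_material", "flow_velocity", "span_length", "protection_works",
]

# Synthetic, imbalanced stand-in for the scour-risk records
X, y = make_classification(
    n_samples=1000, n_features=len(feature_names),
    n_informative=5, weights=[0.85, 0.15], random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0,
)

# Rebalance the minority (high-risk) class with SMOTE, as listed in the keywords
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# XGBoost classifier, as named in the abstract
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_res, y_res)

# Global explanation: mean absolute SHAP value per feature
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Local explanation: per-feature contributions to one bridge's prediction
print("Local SHAP values for the first test bridge:", shap_values[0])

In this sketch, the mean absolute SHAP value per feature plays the role of the global feature-importance ranking that would be compared against the engineers' survey, while the per-sample SHAP vector provides the local explanation for an individual bridge.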
Pages: 13
Related papers
50 items in total
  • [31] ExplaiNAble BioLogical Age (ENABL Age): an artificial intelligence framework for interpretable biological age
    Qiu, Wei
    Chen, Hugh
    Kaeberlein, Matt
    Lee, Su-In
    LANCET HEALTHY LONGEVITY, 2023, 4 (12): E711-E723
  • [32] Explainable Artificial Intelligence (XAI) Model for Earthquake Spatial Probability Assessment in Arabian Peninsula
    Jena, Ratiranjan
    Shanableh, Abdallah
    Al-Ruzouq, Rami
    Pradhan, Biswajeet
    Gibril, Mohamed Barakat A.
    Khalil, Mohamad Ali
    Ghorbanzadeh, Omid
    Ganapathy, Ganapathy Pattukandan
    Ghamisi, Pedram
    REMOTE SENSING, 2023, 15 (09)
  • [33] Spatial flood susceptibility mapping using an explainable artificial intelligence (XAI) model
    Pradhan, Biswajeet
    Lee, Saro
    Dikshit, Abhirup
    Kim, Hyesu
    GEOSCIENCE FRONTIERS, 2023, 14 (06)
  • [34] An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets
    Perez-Landa, Gabriel Ichcanziho
    Loyola-Gonzalez, Octavio
    Medina-Perez, Miguel Angel
    APPLIED SCIENCES-BASEL, 2021, 11 (22)
  • [35] An Explainable Artificial Intelligence Model for Clustering Numerical Databases
    Loyola-Gonzalez, Octavio
    Eduardo Gutierrez-Rodriguez, Andres
    Angel Medina-Perez, Miguel
    Monroy, Raul
    Francisco Martinez-Trinidad, Jose
    Ariel Carrasco-Ochoa, Jesus
    Garcia-Borroto, Milton
    IEEE ACCESS, 2020, 8: 52370-52384
  • [36] Comprehensive approach for scour modelling using artificial intelligence
    Rathod, Praveen
    Manekar, Vivek L.
    MARINE GEORESOURCES & GEOTECHNOLOGY, 2023, 41 (03): 312-326
  • [37] Positional assessment of lower third molar and mandibular canal using explainable artificial intelligence
    Kempers, Steven
    van Lierop, Pieter
    Hsu, Tzu-Ming Harry
    Moin, David Anssari
    Berge, Stefaan
    Ghaeminia, Hossein
    Xi, Tong
    Vinayahalingam, Shankeeth
    JOURNAL OF DENTISTRY, 2023, 133
  • [38] Explainable artificial intelligence for the automated assessment of the retinal vascular tortuosity
    Hervella, Álvaro S.
    Ramos, Lucía
    Rouco, José
    Novo, Jorge
    Ortega, Marcos
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, 62: 865-881
  • [39] Coloring Molecules with Explainable Artificial Intelligence for Preclinical Relevance Assessment
    Jimenez-Luna, Jose
    Skalic, Miha
    Weskamp, Nils
    Schneider, Gisbert
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2021, 61 (03): 1083-1094
  • [40] Artificial intelligence application to bridge painting assessment
    Chen, PH
    Chang, LM
    AUTOMATION IN CONSTRUCTION, 2003, 12 (04): 431-445