An interpretable model for bridge scour risk assessment using explainable artificial intelligence and engineers' expertise

Times cited: 0
Authors
Wang, Tianyu [1 ,2 ]
Reiffsteck, Philippe [2 ]
Chevalier, Christophe [2 ]
Chen, Chi-Wei [1 ]
Schmidt, Franziska [3 ]
Affiliations
[1] SNCF Reseau, Dept Ouvrages Art, EMF DET, DTR, DGII, F-93212 La Plaine St Denis, France
[2] Univ Gustave Eiffel, GERS SRO, Marne La Vallee, France
[3] Univ Gustave Eiffel, MAST EMGCU, Marne La Vallee, France
Keywords
Engineering judgment; explainable artificial intelligence; machine learning; scour risk assessment; surrogate models; SMOTE;
DOI
10.1080/15732479.2023.2230564
CLC classification
TU [Building Science];
Discipline code
0813 ;
Abstract
A machine learning (ML) model is difficult to apply in practice if it is not known how its predictions are made. To address this issue, this paper uses explainable artificial intelligence (XAI) and engineers' expertise to interpret an ML model for bridge scour risk prediction. Using data from the French National Railway Company (SNCF), an ML model based on the extreme gradient boosting (XGBoost) algorithm was first constructed. XAI approaches were then employed to obtain global and local explanations, as well as explicit expressions, for interpreting the model. In parallel, a group of SNCF engineers was asked to rank the input parameters based on their engineering judgment. Finally, the feature importance obtained from the XAI approaches was compared with that from the engineers' survey. For both the XAI and the engineers' interpretations, the observation of local scour around the bridge foundation was found to be the most important feature for decision-making. The differences between the XAI interpretations and human expertise highlight the importance of knowledge in hydrology and hydromorphology for scour risk assessment, since engineers currently make decisions primarily based on observed damage (e.g., scour holes, cracks). The results of this paper can make the ML model trustworthy by revealing how its predictions are made, and provide valuable guidance for improving current inspection procedures.
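The workflow the abstract describes — train a gradient-boosting classifier, rank features by model-derived importance, and compare that ranking with an engineers' survey ranking — can be sketched as follows. This is not the authors' code: the feature names, the synthetic data, the engineer ranking, and the use of scikit-learn's `GradientBoostingClassifier` (as a stand-in for XGBoost) are all illustrative assumptions.

```python
# Hedged sketch: gradient-boosting feature importance vs. a hypothetical
# engineers' ranking, echoing the paper's comparison. All names and data
# below are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
features = ["local_scour_observed", "pier_width", "flow_velocity",
            "bed_material", "foundation_depth"]
X = rng.normal(size=(300, len(features)))
# Synthetic risk label driven mostly by the first feature, mirroring the
# paper's finding that observed local scour dominates the prediction.
y = (X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
ml_rank = np.argsort(-model.feature_importances_)  # most important first

# Hypothetical survey result (1 = most important), standing in for the
# SNCF engineers' ranking of the input parameters.
engineer_rank = np.array([1, 4, 2, 5, 3])
# Negate the ranks so that "more important" points the same way for both.
rho, _ = spearmanr(model.feature_importances_, -engineer_rank)

print("ML ranking:", [features[i] for i in ml_rank])
print("Spearman agreement with engineers:", round(rho, 2))
```

A global XAI method such as SHAP would replace `feature_importances_` with mean absolute attribution values, but the comparison step (rank correlation between model-derived and human-derived orderings) stays the same.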
Pages: 13