Elucidating precipitation in FeCrAl alloys through explainable AI: A case study

Cited by: 7
Authors:
Ravi, Sandipp Krishnan [1 ]
Roy, Indranil [1 ]
Roychowdhury, Subhrajit [1 ]
Feng, Bojun [1 ]
Ghosh, Sayan [1 ]
Reynolds, Christopher [1 ]
Umretiya, Rajnikant V. [1 ]
Rebak, Raul B. [1 ]
Hoffman, Andrew K. [1 ]
Affiliation:
[1] GE Research, Niskayuna, NY 12309, USA
Keywords:
Explainable AI (XAI); Shapley Additive Explanations (SHAP); Material informatics; FeCrAl alloy; Precipitation; Age hardening; Nuclear cladding; PHASE-SEPARATION KINETICS; DEGREES-C EMBRITTLEMENT; MECHANICAL-PROPERTIES; ODS ALLOY; AL; ALPHA';
DOI
10.1016/j.commatsci.2023.112440
Chinese Library Classification (CLC): T [Industrial Technology]
Discipline code: 08
Abstract:
A primary challenge of using FeCrAl alloys in high-temperature industrial settings is the formation of α′ precipitates, which embrittle the alloy and lead to failure through fracture. The precipitation causes a hardness change during thermal aging that is sensitive to both alloy composition and experimental conditions (i.e., the temperature and time of heat treatment). A Gaussian Process Regression (GPR) model is built on hardness data collected at GE Research. Subsequently, for the first time, SHapley Additive exPlanations (SHAP) built upon the GPR model is used as an Explainable Artificial Intelligence (XAI) tool to understand how feature values drive the hardness change. The SHAP analysis confirmed that, as expected, the primary chemical driver of α′ age hardening in the FeCrAl system is Cr. However, the analysis also indicated that Al does not show a clear trend of only suppressing the formation of α′, which contradicts the current literature. This lack of a trend may stem from the ability of Al both to suppress α′ formation thermodynamically and to enhance it kinetically. Similarly, the SHAP analysis points towards Mo having no clear trend of either enhancing or suppressing α′. This study indicates that more in-depth studies focusing on both chemistry and different aging temperatures (to probe kinetics) should be performed to better understand the aging of this system.
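The workflow described in the abstract pairs a GPR surrogate with SHAP attribution. The sketch below is a minimal illustration of that pairing, not the authors' code: the feature names (Cr, Al, Mo, aging temperature, aging time) follow the abstract, but the data are synthetic placeholders rather than the GE Research measurements, and shap.KernelExplainer is one possible model-agnostic explainer choice, not necessarily the one used in the paper.

    # Minimal sketch (assumed features, synthetic data): GPR model of hardness
    # change plus SHAP attribution of the fitted model's predictions.
    import numpy as np
    import shap
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    # Hypothetical inputs: Cr, Al, Mo content (wt.%), aging temperature (C), aging time (h).
    feature_names = ["Cr", "Al", "Mo", "temperature", "time"]
    X = np.column_stack([
        rng.uniform(10, 22, 200),    # Cr
        rng.uniform(3, 6, 200),      # Al
        rng.uniform(0, 3, 200),      # Mo
        rng.uniform(400, 500, 200),  # aging temperature
        rng.uniform(1, 5000, 200),   # aging time
    ])
    # Placeholder response standing in for the measured hardness change.
    y = 0.8 * X[:, 0] + rng.normal(0.0, 1.0, 200)

    # Gaussian Process Regression of hardness change vs. composition and aging conditions.
    gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gpr.fit(X, y)

    # Model-agnostic SHAP: attribute each prediction to the input features.
    background = shap.sample(X, 50)                      # background set for the explainer
    explainer = shap.KernelExplainer(gpr.predict, background)
    shap_values = explainer.shap_values(X[:20])

    # Summary plot: which features drive predicted hardening, and in what direction.
    shap.summary_plot(shap_values, X[:20], feature_names=feature_names)

In this setup the sign of a feature's SHAP value indicates whether that feature value pushes the predicted hardness change up or down, which is the kind of per-feature trend analysis (e.g., for Cr, Al, and Mo) discussed in the abstract.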
Pages: 12
Related papers (50 in total):
  • [31] Explainable AI to understand study interest of engineering students
    Ghosh, Sourajit
    Kamal, Md. Sarwar
    Chowdhury, Linkon
    Neogi, Biswarup
    Dey, Nilanjan
    Sherratt, Robert Simon
    EDUCATION AND INFORMATION TECHNOLOGIES, 2024, 29 (04) : 4657 - 4672
  • [33] Towards unveiling sensitive and decisive patterns in explainable AI with a case study in geometric deep learning
    Zhu, Jiajun
    Miao, Siqi
    Ying, Rex
    Li, Pan
    NATURE MACHINE INTELLIGENCE, 2025, : 471 - 483
  • [34] Study of microstructure and corrosion resistance of FeCrAl-Gd alloys
    Liu, Rui
    Sun, Hongliang
    Jiang, Xiaosong
    Liu, Xili
    Yan, Weiwen
    MATERIALS CHEMISTRY AND PHYSICS, 2023, 297
  • [35] Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life
    Kobayashi, Kazuma
    Alam, Syed Bahauddin
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 129
  • [36] Can I Trust My Anomaly Detection System? A Case Study Based on Explainable AI
    Rashid, Muhammad
    Amparore, Elvio
    Ferrari, Enrico
    Verda, Damiano
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2024, PT IV, 2024, 2156 : 243 - 254
  • [37] Explainable AI for Multimodal Credibility Analysis: Case Study of Online Beauty Health (Mis)-Information
    Wagle, Vidisha
    Kaur, Kulveen
    Kamat, Pooja
    Patil, Shruti
    Kotecha, Ketan
    IEEE ACCESS, 2021, 9 : 127985 - 128022
  • [38] Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study
    Milckel, Bjorn
    Dinglinger, Pascal
    Holtmann, Jonas
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2024, PT IV, 2024, 2156 : 331 - 352
  • [39] Trustworthy and explainable AI achieved through knowledge graphs and social implementation
    Fuji, Masaru
    Nakazawa, Katsuhito
    Yoshida, Hiroaki
    Fujitsu Ltd, 56: 39 - 45
  • [40] Identifying Insecure Network Configurations Through Attack Modeling and Explainable AI
    Thomas, Blessy
    Thampi, Sabu M.
    Mukherjee, Preetam
    INFORMATION SYSTEMS SECURITY, ICISS 2024, 2025, 15416 : 201 - 212