Explaining reaction coordinates of alanine dipeptide isomerization obtained from deep neural networks using Explainable Artificial Intelligence (XAI)

Cited by: 30
Authors:
Kikutsuji, Takuma [1 ]
Mori, Yusuke [1 ]
Okazaki, Kei-ichi [2 ,3 ]
Mori, Toshifumi [4 ,5 ]
Kim, Kang [1 ]
Matubayasi, Nobuyuki [1 ]
Affiliations:
[1] Osaka Univ, Grad Sch Engn Sci, Dept Mat Engn Sci, Div Chem Engn, Toyonaka, Osaka 5608531, Japan
[2] Inst Mol Sci, Res Ctr Computat Sci, Okazaki, Aichi 4448585, Japan
[3] Grad Univ Adv Studies, Sokendai, Okazaki, Aichi 4448585, Japan
[4] Kyushu Univ, Inst Mat Chem & Engn, Kasuga, Fukuoka 8168580, Japan
[5] Kyushu Univ, Interdisciplinary Grad Sch Engn Sci, Kasuga, Fukuoka 8168580, Japan
Source:
JOURNAL OF CHEMICAL PHYSICS | 2022, Vol. 156, Issue 15
Keywords:
COLLECTIVE VARIABLES; KINETIC PATHWAYS; TRANSITION PATHS; NUCLEATION; MECHANISM; DYNAMICS; MAXIMIZATION; SURFACE;
DOI:
10.1063/5.0087310
Chinese Library Classification (CLC):
O64 [Physical Chemistry (Theoretical Chemistry), Chemical Physics]
Subject Classification Codes:
070304; 081704
Abstract:
A method for obtaining appropriate reaction coordinates is required to identify transition states that distinguish the reactant from the product in complex molecular systems. Recently, abundant research has been devoted to obtaining reaction coordinates with artificial neural networks from the deep learning literature, where many collective variables are typically used in the input layer. However, because of the complexity of the nonlinear functions in deep neural networks, it is difficult to explain which collective variables contribute to the predicted reaction coordinates. To overcome this limitation, we used the Explainable Artificial Intelligence (XAI) methods Local Interpretable Model-agnostic Explanations (LIME) and the game-theory-based framework Shapley Additive exPlanations (SHAP). We demonstrate that XAI yields the degree to which each collective variable contributes to the reaction coordinate, which is determined by nonlinear deep-learning regression of the committor for alanine dipeptide isomerization in vacuum. In particular, both LIME and SHAP identify the features important to the predicted reaction coordinate, which are characterized by appropriate dihedral angles consistent with those previously reported from committor test analyses. The present study offers an AI-aided framework for explaining appropriate reaction coordinates, which becomes especially significant as the number of degrees of freedom increases. (C) 2022 Author(s).
Pages: 8
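
As a rough illustration of the workflow described in the abstract, the sketch below fits a small feed-forward network to a committor-like target built from synthetic dihedral-angle features and then attributes each input collective variable's contribution with SHAP. The synthetic data, the dihedral names, the scikit-learn MLPRegressor network, and the use of shap.KernelExplainer are assumptions made for illustration; they do not reproduce the authors' actual model, trajectories, or training protocol.

# Minimal sketch (illustrative only): regress a committor-like target on
# collective variables, then attribute each variable's contribution with SHAP.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical collective variables: four dihedral angles per configuration.
X = rng.uniform(-np.pi, np.pi, size=(2000, 4))
# Toy committor-like target in [0, 1] that depends mainly on the second angle,
# standing in for committor values estimated from shooting trajectories.
y = 1.0 / (1.0 + np.exp(-3.0 * np.sin(X[:, 1])))

# Nonlinear regression of the committor with a small feed-forward network.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# SHAP (model-agnostic KernelExplainer): per-sample, per-variable contributions
# to the network's prediction, computed against a background data set.
background = X[:100]
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:100])

# Mean absolute SHAP value per collective variable as an overall importance;
# here the second angle should dominate, mirroring how the paper ranks
# dihedral angles by their contribution to the reaction coordinate.
importance = np.abs(shap_values).mean(axis=0)
for name, score in zip(["phi", "psi", "theta1", "theta2"], importance):
    print(f"{name}: {score:.3f}")

A LIME analysis would follow the same pattern, replacing the SHAP explainer with lime.lime_tabular.LimeTabularExplainer in regression mode and averaging per-instance explanations over the sampled configurations.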