Explaining Black Box Drug Target Prediction Through Model Agnostic Counterfactual Samples

Cited by: 2
Authors
Nguyen, Tri Minh [1]
Quinn, Thomas P. [1]
Nguyen, Thin [1]
Tran, Truyen [1]
Affiliations
[1] Deakin Univ, Appl Artificial Intelligence Inst, Burwood, Vic 3217, Australia
Keywords
Drugs; Proteins; Predictive models; Biological system modeling; Reinforcement learning; Deep learning; Computational modeling; Black box deep learning; counterfactual explanation; drug-target affinity; substructure interaction; PDBBIND DATABASE;
DOI
10.1109/TCBB.2022.3190266
Chinese Library Classification
Q5 [Biochemistry];
Discipline Codes
071010; 081704;
Abstract
Many high-performance drug-target affinity (DTA) deep learning models have been proposed, but most are black boxes and therefore lack human interpretability. Explainable AI (XAI) can make DTA models more trustworthy and also allows biological knowledge to be distilled from them. Counterfactual explanation is one popular approach to explaining the behaviour of a deep neural network; it works by systematically answering the question "How would the model output change if the inputs were changed in this way?". We propose a multi-agent reinforcement learning framework, Multi-Agent Counterfactual Drug-target binding Affinity (MACDA), to generate counterfactual explanations for the drug-protein complex. The proposed framework provides human-interpretable counterfactual instances while optimizing both the input drug and the input target for counterfactual generation at the same time. We benchmark MACDA on the Davis and PDBBind datasets and find that it produces more parsimonious explanations with no loss in explanation validity, as measured by encoding similarity. We then present a case study involving ABL1 and Nilotinib to demonstrate how MACDA explains which substructure interactions between the inputs underlie a DTA model's prediction, revealing mechanisms that align with prior domain knowledge.
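The abstract's framing of counterfactual explanation ("How would the model output change if the inputs were changed in this way?") can be illustrated with a minimal sketch. The snippet below is not the MACDA algorithm: predict_affinity is a hypothetical stand-in for a black-box DTA model, and the single-token drug edits and alanine-style protein mutations are placeholder perturbation operators, used only to show the query loop and the parsimony criterion (prefer the smallest edits that shift the prediction).

```python
def predict_affinity(drug_smiles: str, protein_seq: str) -> float:
    """Hypothetical black-box DTA model (a stand-in, not the paper's model)."""
    # Toy score: aromatic-carbon count in the SMILES string times the
    # hydrophobic-residue fraction of the protein sequence.
    aromatic = drug_smiles.count("c")
    hydrophobic = sum(protein_seq.count(r) for r in "AVLIFMW")
    return aromatic * hydrophobic / max(len(protein_seq), 1)

def drug_edits(smiles):
    """Candidate drug perturbations: drop one SMILES character (illustrative only)."""
    for i in range(len(smiles)):
        yield smiles[:i] + smiles[i + 1:], f"drop drug token {i} ({smiles[i]!r})"

def protein_edits(seq):
    """Candidate protein perturbations: mutate one residue to alanine."""
    for i, r in enumerate(seq):
        if r != "A":
            yield seq[:i] + "A" + seq[i + 1:], f"mutate residue {i} ({r}->Ala)"

def counterfactuals(drug, protein, model, min_shift=0.3):
    """Single-edit perturbations that move the prediction by at least min_shift.

    One edit at a time keeps explanations parsimonious; each query answers
    "how does the output change if this part of the input changes?"
    """
    base = model(drug, protein)
    hits = []
    for new_drug, why in drug_edits(drug):
        shift = abs(model(new_drug, protein) - base)
        if shift >= min_shift:
            hits.append((why, shift))
    for new_prot, why in protein_edits(protein):
        shift = abs(model(drug, new_prot) - base)
        if shift >= min_shift:
            hits.append((why, shift))
    return sorted(hits, key=lambda h: -h[1])

if __name__ == "__main__":
    # Placeholder inputs, not the actual Nilotinib/ABL1 data from the paper.
    drug = "Cc1ccc(Nc2nccc(n2)c2cccnc2)cc1"
    protein = "MLEICLKLVGCKSKKGLSS"
    for why, shift in counterfactuals(drug, protein, predict_affinity)[:5]:
        print(f"{why}: |change in prediction| = {shift:.2f}")
```

In MACDA itself, the perturbations are proposed by multi-agent reinforcement learning acting on the drug and the target simultaneously, and explanation validity is assessed by encoding similarity rather than a fixed output-shift threshold as in this sketch.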
Pages: 1020-1029
Page count: 10
Related Papers (showing 10 of 50)
  • [1] Explaining Black Box Reinforcement Learning Agents Through Counterfactual Policies
    Movin, Maria
    Dinis Junior, Guilherme
    Hollmen, Jaakko
    Papapetrou, Panagiotis
    ADVANCES IN INTELLIGENT DATA ANALYSIS XXI, IDA 2023, 2023, 13876 : 314 - 326
  • [2] Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models
    Barkan, Oren
    Asher, Yuval
    Eshel, Amit
    Elisha, Yehonatan
    Koenigstein, Noam
    23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023, 2023, : 944 - 949
  • [3] Explaining the black-box smoothly-A counterfactual approach
    Singla, Sumedha
    Eslami, Motahhare
    Pollack, Brian
    Wallace, Stephen
    Batmanghelich, Kayhan
    MEDICAL IMAGE ANALYSIS, 2023, 84
  • [4] Explaining Black Box Models Through Twin Systems
    Cau, Federico Maria
    PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES COMPANION (IUI'20), 2020, : 27 - 28
  • [5] Query-Efficient Target-Agnostic Black-Box Attack
    Moraffah, Raha
    Liu, Huan
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 368 - 377
  • [6] SAM-DTA: a sequence-agnostic model for drug-target binding affinity prediction
    Hu, Zhiqiang
    Liu, Wenfeng
    Zhang, Chenbin
    Huang, Jiawen
    Zhang, Shaoting
    Yu, Huiqun
    Xiong, Yi
    Liu, Hao
    Ke, Song
    Hong, Liang
    BRIEFINGS IN BIOINFORMATICS, 2023, 24 (01)
  • [7] Explaining any black box model using real data
    Bjorklund, Anton
    Henelius, Andreas
    Oikarinen, Emilia
    Kallonen, Kimmo
    Puolamaki, Kai
    FRONTIERS IN COMPUTER SCIENCE, 2023, 5
  • [8] MS-CPFI: A model-agnostic Counterfactual Perturbation Feature Importance algorithm for interpreting black-box Multi-State models
    Cottin, Aziliz
    Zulian, Marine
    Pecuchet, Nicolas
    Guilloux, Agathe
    Katsahian, Sandrine
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 147
  • [9] Explaining Black Boxes With a SMILE: Statistical Model-Agnostic Interpretability With Local Explanations
    Aslansefat, Koorosh
    Hashemian, Mojgan
    Walker, Martin
    Akram, Mohammed Naveed
    Sorokos, Ioannis
    Papadopoulos, Yiannis
    IEEE SOFTWARE, 2024, 41 (01) : 87 - 97
  • [10] Stable and actionable explanations of black-box models through factual and counterfactual rules
    Guidotti, Riccardo
    Monreale, Anna
    Ruggieri, Salvatore
    Naretto, Francesca
    Turini, Franco
    Pedreschi, Dino
    Giannotti, Fosca
    DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 (05) : 2825 - 2862