CoaDTI: multi-modal co-attention based framework for drug-target interaction annotation

Cited by: 17
|
Authors
Huang, Lei [1 ]
Lin, Jiecong [1 ]
Liu, Rui [1 ]
Zheng, Zetian [2 ]
Meng, Lingkuan [2 ]
Chen, Xingjian [1 ]
Li, Xiangtao [1 ]
Wong, Ka-Chun [2 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[2] Jilin Univ, Sch Artificial Intelligence, Jilin, Jilin, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Drug-target interaction; co-attention; multi-mode; deep learning;
DOI
10.1093/bib/bbac446
CLC classification
Q5 [Biochemistry];
Discipline codes
071010 ; 081704 ;
Abstract
Motivation: The identification of drug-target interactions (DTIs) plays a vital role in in silico drug discovery, where the drug is a chemical molecule and the target is the protein residues in its binding pocket. Manual DTI annotation approaches remain reliable; however, testing each drug-target pair exhaustively is notoriously laborious and time-consuming. Recently, the rapid growth of labelled DTI data has catalysed interest in high-throughput DTI prediction. Unfortunately, many existing methods rely heavily on hand-crafted features, which can introduce errors.
Results: Here, we developed an end-to-end deep learning framework called CoaDTI to significantly improve the efficiency and interpretability of drug-target annotation. CoaDTI incorporates a co-attention mechanism to model the interaction information between the drug modality and the protein modality. In particular, CoaDTI uses a transformer to learn protein representations from raw amino acid sequences, and GraphSage to extract molecular graph features from SMILES. Furthermore, we propose a transfer learning strategy that encodes protein features with a pre-trained transformer to address the scarcity of labelled data. The experimental results demonstrate that CoaDTI achieves competitive performance on three public datasets compared with state-of-the-art models. In addition, the transfer learning strategy further boosts performance to an unprecedented level. An extended study reveals that CoaDTI can identify novel DTIs, such as reactions between candidate drugs and severe acute respiratory syndrome coronavirus 2-associated proteins. Visualization of the co-attention scores illustrates the interpretability of our model and offers mechanistic insights.
Availability: The source code is publicly available at https://github.com/Layne-Huang/CoaDTI.
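The abstract describes co-attention between two modalities: drug atom features (from a graph encoder such as GraphSage) and protein residue features (from a transformer). The minimal sketch below illustrates one common form of co-attention pooling under assumptions of ours, not the paper's exact implementation: a bilinear affinity matrix between atoms and residues, followed by softmax-weighted pooling on each side. The weight matrix `W`, the max-pooling over the affinity matrix, and all dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(drug, protein, W):
    """Bilinear co-attention sketch (illustrative, not CoaDTI's exact form).

    drug:    (n_atoms, h)    atom features, e.g. from a GNN
    protein: (n_residues, h) residue features, e.g. from a transformer
    W:       (h, h)          learnable bilinear weight (assumed)
    """
    # Affinity between every atom and every residue
    A = drug @ W @ protein.T                 # (n_atoms, n_residues)
    # Each side attends to its most relevant counterpart
    att_drug = softmax(A.max(axis=1))        # weights over atoms
    att_prot = softmax(A.max(axis=0))        # weights over residues
    drug_vec = att_drug @ drug               # pooled drug representation
    prot_vec = att_prot @ protein            # pooled protein representation
    return drug_vec, prot_vec, A

rng = np.random.default_rng(0)
drug = rng.normal(size=(5, 8))      # 5 atoms, 8-dim features
protein = rng.normal(size=(12, 8))  # 12 residues, 8-dim features
W = rng.normal(size=(8, 8))
d_vec, p_vec, A = co_attention(drug, protein, W)
```

In a full model, the two pooled vectors would be concatenated and fed to a classifier predicting interaction probability, and the affinity matrix `A` is what a co-attention heatmap (as mentioned in the abstract) would visualize.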
Pages: 10
Related papers
50 records in total
  • [1] A co-attention based multi-modal fusion network for review helpfulness prediction
    Ren, Gang
    Diao, Lei
    Guo, Fanjia
    Hong, Taeho
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (01)
  • [2] Drug target interaction prediction via multi-task co-attention
    Weng, Yuyou
    Liu, Xinyi
    Li, Hui
    Lin, Chen
    Liang, Yun
    [J]. INTERNATIONAL JOURNAL OF DATA MINING AND BIOINFORMATICS, 2020, 24 (02) : 160 - 176
  • [3] Multi-modal co-attention relation networks for visual question answering
    Zihan Guo
    Dezhi Han
    [J]. The Visual Computer, 2023, 39 : 5783 - 5795
  • [4] Multi-Modal Co-Attention Capsule Network for Fake News Detection
    Yin, Chunyan
    Chen, Yongheng
    [J]. OPTICAL MEMORY AND NEURAL NETWORKS, 2024, 33 (01) : 13 - 27
  • [8] Joint Gated Co-Attention Based Multi-Modal Networks for Subregion House Price Prediction
    Wang, Pengkun
    Ge, Chuancai
    Zhou, Zhengyang
    Wang, Xu
    Li, Yuantao
    Wang, Yang
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (02) : 1667 - 1680
  • [9] Drug Target Interaction Prediction using Multi-task Learning and Co-attention
    Weng, Yuyou
    Lin, Chen
    Zeng, Xiangxiang
    Liang, Yun
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2019, : 528 - 533
  • [10] Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering
    Yu, Zhou
    Yu, Jun
    Fan, Jianping
    Tao, Dacheng
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1839 - 1848