Multimodal Representation Learning via Graph Isomorphism Network for Toxicity Multitask Learning

Cited by: 0
|
Authors
Wang, Guishen [1 ]
Feng, Hui [1 ]
Du, Mengyan [1 ]
Feng, Yuncong [1 ]
Cao, Chen [2 ]
Affiliations
[1] School of Computer Science and Engineering, Changchun University of Technology, North Yuanda Street No. 3000, Changchun, Jilin 130012, China
[2] Key Laboratory for Bio-Electromagnetic Environment and Advanced Medical Theranostics, School of Biomedical Engineering and Informatics, Nanjing Medical University, Longmian Avenue No. 101, Nanjing, Jiangsu 211166, China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial machine learning - Deep learning - Federated learning - Feedforward neural networks - Graph neural networks;
DOI
10.1021/acs.jcim.4c01061
Abstract
Toxicity is paramount for comprehending compound properties, particularly in the early stages of drug design. Due to the diversity and complexity of toxic effects, computationally predicting compound toxicity remains challenging. To address this challenge, we propose a multimodal representation learning model, termed the multimodal graph isomorphism network (MMGIN), for compound toxicity multitask learning. Based on fingerprints and molecular graphs of compounds, MMGIN incorporates a multimodal representation learning model to acquire a comprehensive compound representation. The model adopts a two-channel structure that learns the fingerprint representation and the molecular graph representation independently. Subsequently, two feedforward neural networks use the learned multimodal compound representation to perform multitask learning, simultaneously handling compound toxicity classification and multiple compound category classification. To test the effectiveness of our model, we constructed a novel data set, termed the compound toxicity multitask learning (CTMTL) data set, derived from the TOXRIC data set. We compare MMGIN with other representative machine learning and deep learning models on the CTMTL and Tox21 data sets. The experimental results demonstrate significant advancements achieved by MMGIN. Furthermore, the ablation study underscores the effectiveness of the introduced fingerprints, molecular graphs, the multimodal representation learning model, and the multitask learning model, showcasing the model's superior predictive capability and robustness. © 2024 American Chemical Society.
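The abstract describes a two-channel architecture: one channel encodes a molecular fingerprint with a feedforward network, the other encodes the molecular graph with GIN-style message passing, and the fused representation feeds two task heads. The sketch below illustrates that data flow in plain numpy. It is a minimal illustrative forward pass only, not the authors' implementation: all layer sizes, the `mmgin_forward` and `gin_layer` names, the single-layer fingerprint encoder, the two-layer graph channel, and mean pooling are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def gin_layer(H, A, W, eps=0.0):
    # GIN node update: h_v <- MLP((1 + eps) * h_v + sum over neighbors of h_u)
    # (here the "MLP" is a single linear map followed by ReLU)
    return relu(((1.0 + eps) * H + A @ H) @ W)

def mmgin_forward(fp, H, A, params):
    # Channel 1: fingerprint representation from a feedforward layer
    z_fp = relu(fp @ params["W_fp"])
    # Channel 2: molecular graph representation from two GIN layers,
    # then mean pooling over atoms to get a graph-level vector
    H1 = gin_layer(H, A, params["W_g1"])
    H2 = gin_layer(H1, A, params["W_g2"])
    z_graph = H2.mean(axis=0)
    # Fuse the two modalities into one compound representation
    z = np.concatenate([z_fp, z_graph])
    # Two feedforward heads for multitask learning:
    # a toxicity logit and compound-category logits
    tox_logit = z @ params["W_tox"]
    cat_logits = z @ params["W_cat"]
    return tox_logit, cat_logits

# Toy molecule: 5 atoms in a chain, 16-dim atom features, 2048-bit fingerprint
n_atoms, d_atom, d_fp, d_hid, n_cat = 5, 16, 2048, 32, 10
A = np.zeros((n_atoms, n_atoms))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0  # symmetric adjacency for an undirected graph
params = {
    "W_fp": rng.normal(size=(d_fp, d_hid)) * 0.01,
    "W_g1": rng.normal(size=(d_atom, d_hid)) * 0.1,
    "W_g2": rng.normal(size=(d_hid, d_hid)) * 0.1,
    "W_tox": rng.normal(size=(2 * d_hid,)) * 0.1,
    "W_cat": rng.normal(size=(2 * d_hid, n_cat)) * 0.1,
}
fp = rng.integers(0, 2, size=d_fp).astype(float)
H = rng.normal(size=(n_atoms, d_atom))
tox_logit, cat_logits = mmgin_forward(fp, H, A, params)
print(float(tox_logit), cat_logits.shape)
```

In practice each channel would be a trained multi-layer network and the heads would be optimized with a joint multitask loss; the sketch only shows how the two modality representations are fused before the task heads.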
Pages: 8322-8338
Related Papers
50 records in total
  • [21] Graph representation learning via redundancy reduction
    He, Mengyao
    Zhao, Qingqing
    Zhang, Han
    Kang, Chuanze
    Li, Wei
    Han, Mingjing
    NEUROCOMPUTING, 2023, 533 : 161 - 177
  • [22] Collaborative representation learning for nodes and relations via heterogeneous graph neural network
    Li, Weimin
    Ni, Lin
    Wang, Jianjia
    Wang, Can
    KNOWLEDGE-BASED SYSTEMS, 2022, 255
  • [23] Federated Multitask Learning for Complaint Identification Using Graph Attention Network
    Singh, A.
    Chandrasekar, S.
    Sen, T.
    Saha, S.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (03): : 1277 - 1286
  • [24] Geodesic Graph Neural Network for Efficient Graph Representation Learning
    Kong, Lecheng
    Chen, Yixin
    Zhang, Muhan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [25] Multitask transfer learning with kernel representation
    Zhang, Yulu
    Ying, Shihui
    Wen, Zhijie
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (15): : 12709 - 12721
  • [26] Multitask transfer learning with kernel representation
    Yulu Zhang
    Shihui Ying
    Zhijie Wen
    Neural Computing and Applications, 2022, 34 : 12709 - 12721
  • [27] Graph-based Subtask Representation Learning via Imitation Learning
    Yoo, Se-Wook
    Seo, Seung-Woo
    2022 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2022,
  • [28] Toward Mathematical Representation of Emotion: A Deep Multitask Learning Method Based On Multimodal Recognition
    Harata, Seiichi
    Sakuma, Takuto
    Kato, Shohei
    COMPANION PUBLICATION OF THE 2020 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '20 COMPANION), 2020, : 47 - 51
  • [29] FusionBrain: Research Project in Multimodal and Multitask Learning
    D. V. Dimitrov
    A. V. Kuznetsov
    A. A. Mal’tseva
    E. F. Goncharova
    Doklady Mathematics, 2022, 106 : S129 - S130
  • [30] FusionBrain: Research Project in Multimodal and Multitask Learning
    Dimitrov, D. V.
    Kuznetsov, A. V.
    Mal'tseva, A. A.
    Goncharova, E. F.
    DOKLADY MATHEMATICS, 2022, 106 (SUPPL 1) : S129 - S130