MoleMCL: a multi-level contrastive learning framework for molecular pre-training

Cited by: 1
Authors
Zhang, Xinyi [1 ]
Xu, Yanni [1 ]
Jiang, Changzhi [1 ]
Shen, Lian [1 ]
Liu, Xiangrong [1 ,2 ]
Affiliations
[1] Xiamen Univ, Dept Comp Sci & Technol, Xiamen 361005, Peoples R China
[2] Xiamen Univ, Natl Inst Data Sci Hlth & Med, Xiamen 361005, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DISCOVERY;
D O I
10.1093/bioinformatics/btae164
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Discipline codes
071010 ; 081704 ;
Abstract
Motivation: Molecular representation learning plays an indispensable role in crucial tasks such as property prediction and drug design. Despite the notable achievements of molecular pre-training models, current methods often fail to capture both the structural and feature semantics of molecular graphs. Moreover, while graph contrastive learning has opened new prospects, existing augmentation techniques often struggle to retain the core semantics of the molecule. To overcome these limitations, we propose a gradient-compensated encoder parameter perturbation approach that ensures efficient and stable feature augmentation. By combining augmentation strategies grounded in attribute masking and parameter perturbation, we introduce MoleMCL, a new MOLEcular pre-training model based on Multi-level Contrastive Learning.
Results: Experimental results demonstrate that MoleMCL adeptly captures both the structural and feature semantics of molecular graphs, surpassing current state-of-the-art models on molecular prediction tasks and opening a novel avenue for molecular modeling.
Availability and implementation: The code and data underlying this work are available on GitHub at https://github.com/BioSequenceAnalysis/MoleMCL.
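The parameter-perturbation augmentation described in the abstract can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the single linear "encoder", the perturbation scale `eps`, and the way the gradient offsets the noise are stand-ins, not the authors' exact formulation. The idea shown is that a second view of a molecule is produced by perturbing the encoder's weights rather than the input graph, with the noise compensated along the loss gradient so the perturbed view stays semantically close to the original.

```python
import math
import random


def encode(x, w):
    # Toy "encoder": a single linear map z = W x, standing in for a GNN
    # encoder over a molecular graph.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]


def perturbed_weights(w, grad, eps=0.05):
    # Gradient-compensated perturbation (sketch): add small random noise to
    # each parameter, offset by the loss gradient at that parameter, so the
    # perturbed encoder drifts little in the loss landscape.
    rng = random.Random(0)  # fixed seed for reproducibility
    return [
        [wij + eps * (rng.uniform(-1.0, 1.0) - gij)
         for wij, gij in zip(w_row, g_row)]
        for w_row, g_row in zip(w, grad)
    ]


def cosine(a, b):
    # Cosine similarity, as used to score positive pairs in contrastive loss.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


if __name__ == "__main__":
    w = [[1.0, 0.0], [0.0, 1.0]]     # toy encoder weights
    grad = [[0.1, 0.0], [0.0, 0.1]]  # toy loss gradient w.r.t. the weights
    x = [1.0, 2.0]                   # toy molecular feature vector
    z_orig = encode(x, w)
    z_aug = encode(x, perturbed_weights(w, grad))
    # The two embeddings form a positive pair: highly similar, not identical.
    print(cosine(z_orig, z_aug))
```

In a full contrastive setup, `z_orig` and `z_aug` would be treated as a positive pair (and embeddings of other molecules as negatives) in a contrastive objective such as NT-Xent; the masking-based view mentioned in the abstract would supply an additional level of contrast.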
Pages: 7
Related papers
50 records total
  • [1] A Multi-view Molecular Pre-training with Generative Contrastive Learning
    Liu, Yunwu
    Zhang, Ruisheng
    Yuan, Yongna
    Ma, Jun
    Li, Tongfeng
    Yu, Zhixuan
    [J]. INTERDISCIPLINARY SCIENCES-COMPUTATIONAL LIFE SCIENCES, 2024, 16 (03) : 741 - 754
  • [2] Contrastive Pre-training with Multi-level Alignment for Grounded Multimodal Named Entity Recognition
    Bao, Xigang
    Tian, Mengyuan
    Wang, Luyao
    Zha, Zhiyuan
    Qin, Biao
    [J]. PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 795 - 803
  • [3] Graph Contrastive Multi-view Learning: A Pre-training Framework for Graph Classification
    Adjeisah, Michael
    Zhu, Xinzhong
    Xu, Huiying
    Ayall, Tewodros Alemu
[J]. KNOWLEDGE-BASED SYSTEMS, 2024, 299
  • [4] Multilingual Molecular Representation Learning via Contrastive Pre-training
    Guo, Zhihui
    Sharma, Pramod
    Martinez, Andy
    Du, Liang
    Abraham, Robin
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 3441 - 3453
  • [5] Multi-level Contrastive Learning Framework for Sequential Recommendation
    Wang, Ziyang
    Liu, Huoyu
    Wei, Wei
    Hu, Yue
    Mao, Xian-Ling
    He, Shaojian
    Fang, Rui
    Chen, Dangyang
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2098 - 2107
  • [6] Robust Pre-Training by Adversarial Contrastive Learning
    Jiang, Ziyu
    Chen, Tianlong
    Chen, Ting
    Wang, Zhangyang
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [7] Multi-Modal Contrastive Pre-training for Recommendation
    Liu, Zhuang
    Ma, Yunpu
    Schubert, Matthias
    Ouyang, Yuanxin
    Xiong, Zhang
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2022, 2022, : 99 - 108
  • [8] New Intent Discovery with Pre-training and Contrastive Learning
    Zhang, Yuwei
    Zhang, Haode
    Zhan, Li-Ming
    Wu, Xiao-Ming
    Lam, Albert Y. S.
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 256 - 269
  • [9] Image Difference Captioning with Pre-training and Contrastive Learning
    Yao, Linli
    Wang, Weiying
    Jin, Qin
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3108 - 3116
  • [10] Bridge Pre-Training and Clustering: A Unified Contrastive Learning Framework for OOD Intent Discovery
    Mou, Yutao
    Xu, Heyang
    [J]. IEEE ACCESS, 2023, 11 : 63714 - 63724