Energy-Motivated Equivariant Pretraining for 3D Molecular Graphs

Cited by: 0
Authors
Jiao, Rui [1 ,2 ]
Han, Jiaqi [1 ,2 ]
Huang, Wenbing [4 ,5 ]
Rong, Yu [6 ]
Liu, Yang [1 ,2 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRist, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Tsinghua Univ, Inst Ind Res AIR, Beijing, Peoples R China
[3] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
[4] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China
[5] Beijing Key Lab Big Data Management & Anal Method, Beijing, Peoples R China
[6] Tencent AI Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Pretraining molecular representation models without labels is fundamental to various applications. Conventional methods mainly process 2D molecular graphs and focus solely on 2D tasks, making their pretrained models incapable of characterizing 3D geometry and thus ill-suited to downstream 3D tasks. In this work, we tackle 3D molecular pretraining in a complete and novel sense. In particular, we first propose to adopt an equivariant energy-based model as the backbone for pretraining, which respects the symmetries of 3D space. Then we develop a node-level pretraining loss for force prediction, where we further exploit the Riemann-Gaussian distribution to ensure that the loss is E(3)-invariant, improving robustness. Moreover, a graph-level noise scale prediction task is also leveraged to further improve the final performance. We evaluate our model, pretrained on the large-scale 3D dataset GEOM-QM9, on two challenging 3D benchmarks: MD17 and QM9. Experimental results demonstrate the efficacy of our method compared with current state-of-the-art pretraining approaches, and verify the validity of each proposed component. Code is available at https://github.com/jiaor17/3D-EMGP.
Pages: 8096-8104
Page count: 9
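The abstract describes two self-supervised objectives: a node-level denoising ("force" prediction) loss computed on perturbed 3D coordinates, and a graph-level noise-scale prediction task. The snippet below is a minimal PyTorch sketch of how such a pair of losses could be combined, not the authors' implementation (see the linked repository for that). The `ToyEncoder`, the isotropic Gaussian perturbation standing in for the paper's Riemann-Gaussian distribution, and the noise scales in `SIGMAS` are all illustrative assumptions.

```python
# Minimal sketch (assumptions: ToyEncoder, SIGMAS, isotropic noise) of the two
# pretraining objectives named in the abstract: node-level denoising ("force")
# prediction plus graph-level noise-scale prediction. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

SIGMAS = torch.tensor([0.01, 0.1, 1.0])  # illustrative noise scales


class ToyEncoder(nn.Module):
    """Stand-in for an equivariant GNN: maps (atom types, coordinates) to
    per-node 3D vectors and a graph embedding. Not actually equivariant."""

    def __init__(self, num_types: int = 10, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_types, hidden)
        self.node_head = nn.Linear(hidden + 3, 3)
        self.graph_head = nn.Linear(hidden + 3, hidden)

    def forward(self, z, pos):
        h = torch.cat([self.embed(z), pos], dim=-1)
        return self.node_head(h), self.graph_head(h).mean(dim=0)


def pretrain_losses(encoder, scale_clf, z, pos):
    """Self-supervised losses for a single molecule (z: atom types, pos: Nx3)."""
    idx = torch.randint(len(SIGMAS), (1,))   # sample a noise level
    sigma = SIGMAS[idx]
    noise = torch.randn_like(pos) * sigma    # isotropic stand-in for Riemann-Gaussian noising
    pred_force, graph_emb = encoder(z, pos + noise)
    # Node-level: regress the denoising direction (score-matching-style target).
    loss_node = F.mse_loss(pred_force, -noise / sigma)
    # Graph-level: classify which noise scale was applied.
    loss_graph = F.cross_entropy(scale_clf(graph_emb).unsqueeze(0), idx)
    return loss_node + loss_graph


encoder = ToyEncoder()
scale_clf = nn.Linear(64, len(SIGMAS))       # graph-level noise-scale classifier
z = torch.randint(0, 10, (5,))               # toy molecule with 5 atoms
pos = torch.randn(5, 3)
loss = pretrain_losses(encoder, scale_clf, z, pos)
loss.backward()
```

In the actual method, the backbone is an equivariant energy-based model, so the predicted per-node vectors transform consistently under rotations and translations of the input geometry; the toy encoder above does not provide that guarantee.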