Self-attentive Rationalization for Interpretable Graph Contrastive Learning

Cited: 0
Authors
Li, Sihang [1 ]
Luo, Yanchen [1 ]
Zhang, An [2 ]
Wang, Xiang [1 ]
Li, Longfei [3 ]
Zhou, Jun [3 ]
Chua, Tat-seng [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Natl Univ Singapore, Singapore, Singapore
[3] Ant Grp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised learning; interpretability; graph contrastive learning; self-attention mechanism;
DOI
10.1145/3665894
Chinese Library Classification
TP [Automation & Computer Technology];
Discipline Code
0812;
Abstract
Graph augmentation is the key component for revealing the instance-discriminative features of a graph as its rationale (an interpretation of it) in graph contrastive learning (GCL). Existing rationale-aware augmentation mechanisms in GCL frameworks roughly fall into two categories, each with inherent limitations: (1) non-heuristic methods guided by domain knowledge to preserve salient features, which require expensive expertise and lack generality; or (2) heuristic augmentations with a co-trained auxiliary model to identify crucial substructures, which face not only the dilemma between system complexity and transformation diversity but also the instability stemming from co-training two separate sub-models. Inspired by recent studies on transformers, we propose self-attentive rationale-guided GCL (SR-GCL), which integrates the rationale generator and the encoder, leverages the self-attention values in the transformer module as natural guidance to delineate semantically informative substructures from both node- and edge-wise perspectives, and contrasts on rationale-aware augmented pairs. On real-world biochemistry datasets, visualization results verify the effectiveness and interpretability of self-attentive rationalization, and results on downstream tasks demonstrate the state-of-the-art performance of SR-GCL for graph model pre-training. Codes are available at https://github.com/lsh0520/SR-GCL.
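The mechanism the abstract describes, using self-attention values as a natural importance signal to preserve a rationale substructure while perturbing the rest, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `rationale_aware_mask`, the per-node scalar scores (e.g., the averaged attention each node receives), and the two ratio parameters are all hypothetical choices for the sketch.

```python
import numpy as np

def rationale_aware_mask(attn_scores, rationale_ratio=0.5, keep_prob=0.5, rng=None):
    """Node-keep mask for one augmented view (illustrative sketch).

    Nodes whose (hypothetical) attention score ranks in the top
    `rationale_ratio` fraction form the rationale and are always kept;
    every remaining node survives independently with probability
    `keep_prob`, which is where augmentation diversity comes from.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(attn_scores, dtype=float)
    n = scores.size
    k = max(1, int(round(rationale_ratio * n)))
    rationale = np.argsort(scores)[::-1][:k]   # indices of top-k scored nodes
    mask = rng.random(n) < keep_prob           # random survival for the rest
    mask[rationale] = True                     # rationale nodes always survive
    return mask
```

Two views generated with different random states share the rationale but differ in the dropped remainder, which is what makes the resulting contrastive pair rationale-aware.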
Pages: 21
Related Papers
50 total
  • [1] Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics
    Ng, Tiong-Sik
    Chai, Jacky Chen Long
    Low, Cheng-Yaw
    Teoh, Andrew Beng Jin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3251 - 3264
  • [3] Learning Dynamic Graph Embedding for Traffic Flow Forecasting: A Graph Self-Attentive Method
    Kang, Zifeng
    Xu, Hanwen
    Hu, Jianming
    Pei, Xin
    2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 2570 - 2576
  • [4] Graph convolutional network and self-attentive for sequential recommendation
    Guo, Kaifeng
    Zeng, Guolei
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [5] Learning Relevant Molecular Representations via Self-Attentive Graph Neural Networks
    Kikuchi, Shoma
    Takigawa, Ichigaku
    Oyama, Satoshi
    Kurihara, Masahiro
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 5364 - 5369
  • [6] Self-Attentive Pooling for Efficient Deep Learning
    Chen, Fang
    Datta, Gourav
    Kundu, Souvik
    Beerel, Peter A.
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 3963 - 3972
  • [7] Self-Attentive Associative Memory
    Le, Hung
    Tran, Truyen
    Venkatesh, Svetha
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [8] AsGCL: Attentive and Simple Graph Contrastive Learning for Recommendation
    Li, Jie
    Yang, Changchun
    APPLIED SCIENCES-BASEL, 2025, 15 (05):
  • [9] Interpretable disease prediction using heterogeneous patient records with self-attentive fusion encoder
    Kwak, Heeyoung
    Chang, Jooyoung
    Choe, Byeongjin
    Park, Sangmin
    Jung, Kyomin
    JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2021, 28 (10) : 2155 - 2164
  • [10] Self-Attentive Sequential Recommendation
    Kang, Wang-Cheng
    McAuley, Julian
    2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018, : 197 - 206