Self-attentive Rationalization for Interpretable Graph Contrastive Learning

Cited: 0
Authors
Li, Sihang [1 ]
Luo, Yanchen [1 ]
Zhang, An [2 ]
Wang, Xiang [1 ]
Li, Longfei [3 ]
Zhou, Jun [3 ]
Chua, Tat-seng [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Natl Univ Singapore, Singapore, Singapore
[3] Ant Grp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised learning; interpretability; graph contrastive learning; self-attention mechanism;
DOI
10.1145/3665894
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Graph augmentation is the key component for revealing the instance-discriminative features of a graph as its rationale (an interpretation of it) in graph contrastive learning (GCL). Existing rationale-aware augmentation mechanisms in GCL frameworks roughly fall into two categories, each with inherent limitations: (1) non-heuristic methods guided by domain knowledge to preserve salient features, which require expensive expertise and lack generality, or (2) heuristic augmentations with a co-trained auxiliary model to identify crucial substructures, which face not only the dilemma between system complexity and transformation diversity but also the instability stemming from co-training two separate sub-models. Inspired by recent studies on transformers, we propose self-attentive rationale-guided GCL (SR-GCL), which integrates the rationale generator and the encoder, leverages the self-attention values in the transformer module as natural guidance to delineate semantically informative substructures from both node- and edge-wise perspectives, and contrasts on rationale-aware augmented pairs. On real-world biochemistry datasets, visualization results verify the effectiveness and interpretability of self-attentive rationalization, and results on downstream tasks demonstrate the state-of-the-art performance of SR-GCL for graph model pre-training. Code is available at https://github.com/lsh0520/SR-GCL.
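The mechanism described in the abstract (using self-attention values to score substructures and build rationale-aware augmented views for contrastive learning) can be illustrated with a minimal sketch. This is a hypothetical simplification rather than the authors' released implementation: names such as SelfAttnRationale, rationale_view, and keep_ratio are invented for illustration, the sketch scores nodes only (not edges), and it operates on single graphs with dense features rather than batched sparse graphs.

```python
# Minimal sketch of self-attention-guided rationale augmentation for GCL.
# Assumptions: node-wise rationale only, one attention head, toy random graphs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttnRationale(nn.Module):
    """Scores nodes with a self-attention layer; the top-rho fraction forms the rationale."""

    def __init__(self, in_dim, hid_dim, keep_ratio=0.7):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=1, batch_first=True)
        self.keep_ratio = keep_ratio

    def forward(self, x):
        # x: (N, in_dim) node features of one graph
        h = self.proj(x).unsqueeze(0)                      # (1, N, hid)
        h, attn_w = self.attn(h, h, h, need_weights=True)  # attn_w: (1, N, N)
        score = attn_w.squeeze(0).sum(dim=0)               # attention each node receives
        k = max(1, int(self.keep_ratio * x.size(0)))
        rationale = torch.zeros(x.size(0), dtype=torch.bool)
        rationale[score.topk(k).indices] = True
        return h.squeeze(0), rationale                     # node embeddings + rationale mask


def rationale_view(h, rationale, drop_prob=0.5):
    """Rationale-aware augmentation: keep rationale nodes, randomly drop the rest, mean-pool."""
    drop = (~rationale) & (torch.rand(rationale.shape) < drop_prob)
    return h[~drop].mean(dim=0)


def info_nce(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two batches of graph-level embeddings."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SelfAttnRationale(in_dim=16, hid_dim=64)
    graphs = [torch.randn(torch.randint(10, 20, (1,)).item(), 16) for _ in range(8)]
    z1, z2 = [], []
    for x in graphs:
        h, mask = model(x)
        z1.append(rationale_view(h, mask))  # two stochastic rationale-aware views per graph
        z2.append(rationale_view(h, mask))
    loss = info_nce(torch.stack(z1), torch.stack(z2))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

In this sketch the attention scores and the encoder come from the same module, echoing the abstract's point that the rationale generator and encoder are integrated rather than co-trained as separate sub-models; augmentation perturbs only the non-rationale complement, so both views preserve the high-attention substructure.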
Pages: 21
Related Papers
50 records in total
  • [31] AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks
    Song, Weiping
    Shi, Chence
    Xiao, Zhiping
    Duan, Zhijian
    Xu, Yewen
    Zhang, Ming
    Tang, Jian
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 1161 - 1170
  • [32] Self-Attentive Subset Learning over a Set-Based Preference in Recommendation
    Liu, Kunjia
    Chen, Yifan
    Tang, Jiuyang
    Huang, Hongbin
    Liu, Lihua
    APPLIED SCIENCES-BASEL, 2023, 13 (03):
  • [33] SAIN: Self-Attentive Integration Network for Recommendation
    Yun, Seoungjun
    Kim, Raehyun
    Ko, Miyoung
    Kang, Jaewoo
    PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, : 1205 - 1208
  • [34] Self-Attentive Graph Convolution Network With Latent Group Mining and Collaborative Filtering for Personalized Recommendation
    Liu, Shenghao
    Wang, Bang
    Deng, Xianjun
    Yang, Laurence T.
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (05): : 3212 - 3221
  • [35] A hierarchical self-attentive neural extractive summarizer via reinforcement learning (HSASRL)
    Mohsen, Farida
    Wang, Jiayang
    Al-Sabahi, Kamal
    APPLIED INTELLIGENCE, 2020, 50 : 2633 - 2646
  • [36] An Interpretable Brain Graph Contrastive Learning Framework for Brain Disorder Analysis
    Luo, Xuexiong
    Dong, Guangwei
    Wu, Jia
    Beheshti, Amin
    Yang, Jian
    Xue, Shan
    PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024, 2024, : 1074 - 1077
  • [37] Causal invariance guides interpretable graph contrastive learning in fMRI analysis
    Wei, Boyang
    Zeng, Weiming
    Shi, Yuhu
    Zhang, Hua
    ALEXANDRIA ENGINEERING JOURNAL, 2025, 117
  • [38] Skeleton-Based Human Action Recognition with Adaptive and Self-Attentive Graph Convolution Networks
    Shahid, Ali Raza
    Yan, Hong
    SSRN, 2023,
  • [39] Multivariate Sleep Stage Classification using Hybrid Self-Attentive Deep Learning Networks
    Yuan, Ye
    Jia, Kebin
    Ma, Fenglong
    Xun, Guangxu
    Wang, Yaqing
    Su, Lu
    Zhang, Aidong
    PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2018, : 963 - 968
  • [40] Learning Granger Causality from Instance-wise Self-attentive Hawkes Processes
    Wu, Dongxia
    Ide, Tsuyoshi
    Lozano, Aurelie
    Kollias, Georgios
    Navratil, Jiri
    Abe, Naoki
    Ma, Yi-An
    Yu, Rose
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238