Graph Contrastive Multi-view Learning: A Pre-training Framework for Graph Classification

Cited by: 0

Authors
Adjeisah M. [1 ,2 ]
Zhu X. [2 ,3 ]
Xu H. [2 ,3 ]
Ayall T.A. [4 ]
Affiliations
[1] National Centre for Computer Animation, Bournemouth University, Poole, Bournemouth
[2] College of Computer Science and Technology, Zhejiang Normal University, Zhejiang, Jinhua
[3] Artificial Intelligence Research Institute of Beijing Geekplus Technology Co. Ltd., Beijing
[4] School of Natural and Computing Sciences & Interdisciplinary Centre for Data and AI, University of Aberdeen, Aberdeen
Funding
EU Horizon 2020; National Natural Science Foundation of China;
Keywords
Contrastive learning; Graph classification; Graph neural network; Multi-view representation learning; Pre-trained embeddings;
DOI
10.1016/j.knosys.2024.112112
Abstract
Recent advancements in node and graph classification tasks can be attributed to the adoption of contrastive learning and similarity search. Despite considerable progress, these approaches present challenges. Integrating similarity search adds a layer of complexity to the model, while applying contrastive learning to non-transferable domains or out-of-domain datasets yields less competitive results. In this work, we propose maintaining domain specificity for these tasks, which has demonstrated the potential to improve performance by eliminating the need for additional similarity searches. We adopt a fraction of domain-specific datasets for pre-training, generating augmented pairs that retain structural similarity to the original graph, thereby expanding the set of views. This strategy involves a comprehensive exploration of optimal augmentations to devise multi-view embeddings. An evaluation protocol focused on error minimization, accuracy enhancement, and overfitting prevention guides this process to learn inherent, transferable structural representations that span diverse datasets. We combine pre-trained embeddings and the source graph as input, leveraging local and global graph information to enrich downstream tasks. Furthermore, to maximize the utility of negative samples in contrastive learning, we extend the training mechanism during the pre-training stage. Our method consistently outperforms comparative baselines in comprehensive experiments on benchmark graph datasets of varying sizes and characteristics, establishing new state-of-the-art results. © 2024 The Authors
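The contrastive objective the abstract alludes to — pulling embeddings of two augmented views of the same graph together while pushing apart embeddings of other graphs in the batch (the negative samples) — is commonly instantiated as an NT-Xent/InfoNCE loss. The following is a minimal, dependency-free sketch of that loss, not the paper's actual implementation; function names and the temperature default are illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_loss(view1, view2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired graph embeddings.

    view1[i] and view2[i] are embeddings of two augmentations of graph i
    (the positive pair); every other graph j != i in the batch serves as
    a negative sample for graph i.
    """
    n = len(view1)
    loss = 0.0
    for i in range(n):
        # Exponentiated similarity of the positive pair.
        pos = math.exp(cosine(view1[i], view2[i]) / temperature)
        # Denominator: positive pair plus all negatives from the other view.
        denom = sum(
            math.exp(cosine(view1[i], view2[j]) / temperature)
            for j in range(n)
        )
        loss += -math.log(pos / denom)
    return loss / n
```

When the paired views are well aligned, the loss is small; shuffling the pairing (so positives no longer match) increases it, which is the signal that drives the pre-trained encoder toward augmentation-invariant structural representations.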
Related Papers
50 records in total
  • [31] Contrastive Multi-View Composite Graph Convolutional Networks Based on Contribution Learning for Autism Spectrum Disorder Classification
    Zhu, Hao
    Wang, Jun
    Zhao, Yin-Ping
    Lu, Minhua
    Shi, Jun
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2023, 70 (06) : 1943 - 1954
  • [32] Learning from Feature and Global Topologies: Adaptive Multi-View Parallel Graph Contrastive Learning
    Song, Yumeng
    Li, Xiaohua
    Li, Fangfang
    Yu, Ge
    MATHEMATICS, 2024, 12 (14)
  • [33] MD-GCCF: Multi-view deep graph contrastive learning for collaborative filtering
    Li, Xinlu
    Tian, Yujie
    Dong, Bingbing
    Ji, Shengwei
    NEUROCOMPUTING, 2024, 590
  • [34] A multi-view mask contrastive learning graph convolutional neural network for age estimation
    Zhang, Yiping
    Shou, Yuntao
    Meng, Tao
    Ai, Wei
    Li, Keqin
    KNOWLEDGE AND INFORMATION SYSTEMS, 2024, 66 (11) : 7137 - 7162
  • [35] Joint learning of data recovering and graph contrastive denoising for incomplete multi-view clustering
    Wang, Haiyue
    Wang, Quan
    Miao, Qiguang
    Ma, Xiaoke
    INFORMATION FUSION, 2024, 104
  • [36] Multi-view Graph Contrastive Representation Learning for Drug-Drug Interaction Prediction
    Wang, Yingheng
    Min, Yaosen
    Chen, Xin
    Wu, Ji
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 2921 - 2933
  • [37] Essential multi-view graph learning for clustering
    Shuangxun Ma
    Qinghai Zheng
    Yuehu Liu
    Journal of Ambient Intelligence and Humanized Computing, 2022, 13 : 5225 - 5236
  • [38] Efficient Graph Based Multi-view Learning
    Hu, Hengtong
    Hong, Richang
    Fu, Weijie
    Wang, Meng
    MULTIMEDIA MODELING (MMM 2019), PT I, 2019, 11295 : 691 - 703
  • [39] Essential multi-view graph learning for clustering
    Ma, Shuangxun
    Zheng, Qinghai
    Liu, Yuehu
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021, 13 (11) : 5225 - 5236
  • [40] Multi-view projected clustering with graph learning
    Gao, Quanxue
    Wan, Zhizhen
    Liang, Ying
    Wang, Qianqian
    Liu, Yang
    Shao, Ling
    NEURAL NETWORKS, 2020, 126 (126) : 335 - 346