Graph Contrastive Multi-view Learning: A Pre-training Framework for Graph Classification

Cited by: 0
Authors
Adjeisah M. [1 ,2 ]
Zhu X. [2 ,3 ]
Xu H. [2 ,3 ]
Ayall T.A. [4 ]
Affiliations
[1] National Centre for Computer Animation, Bournemouth University, Poole, Bournemouth
[2] College of Computer Science and Technology, Zhejiang Normal University, Zhejiang, Jinhua
[3] Artificial Intelligence Research Institute of Beijing Geekplus Technology Co. Ltd., Beijing
[4] School of Natural and Computing Sciences & Interdisciplinary Centre for Data and AI, University of Aberdeen, Aberdeen
Funding
European Union Horizon 2020; National Natural Science Foundation of China;
Keywords
Contrastive learning; Graph classification; Graph neural network; Multi-view representation learning; Pre-trained embeddings;
DOI
10.1016/j.knosys.2024.112112
Abstract
Recent advances in node and graph classification tasks can be attributed to contrastive learning and similarity search. Despite considerable progress, these approaches present challenges: integrating similarity search adds complexity to the model, while applying contrastive learning to non-transferable domains or out-of-domain datasets yields less competitive results. In this work, we propose maintaining domain specificity for these tasks, which has demonstrated the potential to improve performance while eliminating the need for additional similarity searches. We use a fraction of domain-specific datasets for pre-training, generating augmented pairs that retain structural similarity to the original graph and thereby increase the number of views. This strategy involves a comprehensive search for optimal augmentations from which to devise multi-view embeddings. An evaluation protocol focused on error minimization, accuracy enhancement, and overfitting prevention guides this process toward learning inherent, transferable structural representations that span diverse datasets. We combine the pre-trained embeddings and the source graph as input, leveraging local and global graph information to enrich downstream tasks. Furthermore, to maximize the utility of negative samples in contrastive learning, we extend the training mechanism during the pre-training stage. Our method consistently outperforms comparative baselines in comprehensive experiments on benchmark graph datasets of varying sizes and characteristics, establishing new state-of-the-art results. © 2024 The Authors
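The abstract's pre-training stage contrasts embeddings of augmented graph views, treating the matching view of each graph as the positive and all other graphs in the batch as negatives. A minimal sketch of this kind of objective is the NT-Xent loss widely used in graph contrastive learning; the function name, tensor shapes, and temperature value below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, d) embeddings of N graphs under two augmentations.
    Row i of z1 and row i of z2 form the positive pair; every other
    embedding in the batch serves as a negative sample.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for index i is i+N, and for index i+N it is i.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random stand-ins for embeddings of a batch of 4 graphs.
z1, z2 = torch.randn(4, 16), torch.randn(4, 16)
loss = nt_xent_loss(z1, z2)
```

Because every non-matching embedding in the batch acts as a negative, larger pre-training batches supply more negatives per positive pair, which is one reason the paper extends the training mechanism at the pre-training stage.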
Related papers
50 records in total
  • [41] Consensus Graph Learning for Multi-View Clustering
    Li, Zhenglai
    Tang, Chang
    Liu, Xinwang
    Zheng, Xiao
    Zhang, Wei
    Zhu, En
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 2461 - 2472
  • [42] Robust Graph Learning for Multi-view Clustering
    Huang, Yixuan
    Xiao, Qingjiang
    Du, Shiqiang
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 7331 - 7336
  • [43] ROME: A Graph Contrastive Multi-View Framework From Hyperbolic Angular Space for MOOCs Recommendation
    Luo, Hao
    Husin, Nor Azura
    Aris, Teh Noranis Mohd
    IEEE ACCESS, 2023, 11 : 9691 - 9700
  • [44] Multi-view graph representation learning for hyperspectral image classification with spectral–spatial graph neural networks
    Hanachi, Refka
    Sellami, Akrem
    Farah, Imed Riadh
    Dalla Mura, Mauro
    Neural Computing and Applications, 2024, 36 : 3737 - 3759
  • [45] An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering
    Wang, Yiqi
    Li, Chaozhuo
    Liu, Zheng
    Li, Mingzheng
    Tang, Jiliang
    Xie, Xing
    Chen, Lei
    Yu, Philip S.
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (02)
  • [46] Multi-view Classification Model for Knowledge Graph Completion
    Jiang, Wenbin
    Guo, Mengfei
    Chen, Yufeng
    Li, Ying
    Xu, Jinan
    Lyu, Yajuan
    Zhu, Yong
    1ST CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 10TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (AACL-IJCNLP 2020), 2020, : 726 - 734
  • [47] Fast Multi-view Graph Kernels for Object Classification
    Zhang, Luming
    Song, Mingli
    Bu, Jiajun
    Chen, Chun
    AI 2011: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2011, 7106 : 570 - 579
  • [48] Multi-view Contrastive Multiple Knowledge Graph Embedding for Knowledge Completion
    Kurokawa, Mori
    Yonekawa, Kei
    Haruta, Shuichiro
    Konishi, Tatsuya
    Asoh, Hideki
    Ono, Chihiro
    Hagiwara, Masafumi
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1412 - 1418
  • [49] Measuring Diversity in Graph Learning: A Unified Framework for Structured Multi-View Clustering
    Huang, Shudong
    Tsang, Ivor W.
    Xu, Zenglin
    Lv, Jiancheng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (12) : 5869 - 5883
  • [50] Robust Diversified Graph Contrastive Network for Incomplete Multi-view Clustering
    Xue, Zhe
    Du, Junping
    Zhou, Hai
    Guan, Zhongchao
    Long, Yunfei
    Zang, Yu
    Liang, Meiyu
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 3936 - 3944