Cognize Yourself: Graph Pre-Training via Core Graph Cognizing and Differentiating

Cited by: 0
Authors:
Yu, Tao [1 ]
Fu, Yao [1 ]
Hu, Linghui [1 ]
Wang, Huizhao [1 ]
Jiang, Weihao [1 ]
Pu, Shiliang [1 ]
Affiliations:
[1] Hikvision Research Institute, Hangzhou, People's Republic of China
Keywords:
GNN Pre-training; Graph Transfer Learning; Graph Neural Networks; Graph Representation Learning
DOI: 10.1145/3511808.3557259
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Subject Classification Code: 0812
Abstract:
While Graph Neural Networks (GNNs) have become the de facto standard in graph representation learning, they still suffer from label scarcity and poor generalization. To alleviate these issues, graph pre-training has been proposed to learn universal patterns from unlabeled data via self-supervised tasks. Most existing graph pre-training methods use only a single self-supervised task, which leads to insufficient knowledge mining. Recently, some works have tried to use multiple self-supervised tasks; however, we argue that these methods still suffer from a serious problem, which we call graph structure impairment. That is, structural gaps exist among the tasks due to the divergence of their optimization objectives, which means that customized graph structures should be provided for different self-supervised tasks. Graph structure impairment not only significantly hurts the generalizability of pre-trained GNNs but also leads to suboptimal solutions, and no study so far has addressed it well. Motivated by Meta-Cognitive theory, we propose a novel model named Core Graph Cognizing and Differentiating (CORE) to address this problem effectively. Specifically, CORE consists of a cognizing network and a differentiating process: the former cognizes a core graph that represents the essential structure of the input graph, and the latter allows it to differentiate into several task-specific graphs for different tasks. In addition, this is the first study to combine graph pre-training with cognitive theory to build a cognition-aware model. Extensive experiments demonstrate the effectiveness of CORE.
Pages: 2413-2422
Page count: 10
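
To make the cognize-and-differentiate idea in the abstract concrete, here is a minimal PyTorch-style sketch: a cognizing network soft-scores the edges of the input graph to extract a core graph, and per-task heads reweight that core graph into task-specific graphs, one per self-supervised objective. The module names, the dense-adjacency formulation, and all dimensions are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the CORE idea from the abstract. Not the paper's code:
# the class names, dense adjacency matrices, and sizes are assumptions.
import torch
import torch.nn as nn


def pairwise_features(x):
    """Concatenate endpoint features for every (i, j) node pair: (n, n, 2d)."""
    n = x.size(0)
    return torch.cat([x.unsqueeze(1).expand(n, n, -1),
                      x.unsqueeze(0).expand(n, n, -1)], dim=-1)


class CognizingNetwork(nn.Module):
    """Scores each existing edge; high-scoring edges form the soft core graph."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, x, adj):
        scores = torch.sigmoid(self.scorer(pairwise_features(x))).squeeze(-1)
        return scores * adj  # keep only existing edges, softly reweighted


class Differentiator(nn.Module):
    """One head per self-supervised task; each adapts the core graph."""
    def __init__(self, in_dim, hid_dim, num_tasks):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * in_dim, hid_dim), nn.ReLU(),
                          nn.Linear(hid_dim, 1))
            for _ in range(num_tasks))

    def forward(self, x, core_adj):
        pair = pairwise_features(x)
        # Each task receives its own edge reweighting of the shared core graph.
        return [core_adj * torch.sigmoid(h(pair)).squeeze(-1)
                for h in self.heads]


# Usage: 5 nodes, 8-dim features, 3 self-supervised tasks.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
core = CognizingNetwork(8, 16)(x, adj)
task_graphs = Differentiator(8, 16, num_tasks=3)(x, core)
print(core.shape, len(task_graphs))  # torch.Size([5, 5]) 3
```

In the paper's setting, each differentiated graph would presumably feed a different self-supervised pre-training loss over a shared GNN encoder; here the task-specific graphs are simply returned for inspection.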