PHGNN: Pre-Training Heterogeneous Graph Neural Networks

Citations: 0
Authors
Li, Xin [1 ]
Wei, Hao [2 ]
Ding, Yu [3 ]
Affiliations
[1] Ningbo Univ Finance & Econ, Coll Finance & Informat, Ningbo 315175, Peoples R China
[2] Natl Key Lab Sci & Technol Bind Signal Proc, Chengdu 610000, Peoples R China
[3] Nanjing Agr Univ, Coll Artificial Intelligence, Nanjing 210095, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Task analysis; Graph neural networks; Semantics; Training; Feature extraction; Aggregates; Vectors; heterogeneous graph; pre-training
DOI
10.1109/ACCESS.2024.3409429
CLC Classification
TP [Automation technology, computer technology]
Subject Classification
0812
Abstract
Graph neural networks (GNNs) on heterogeneous graphs have shown superior performance and attracted considerable research interest. However, many applications require GNNs to make predictions on test examples that are distributionally different from the training ones, while task-specific labeled data is often prohibitively expensive to obtain. An effective approach to this challenge is to pre-train an expressive GNN model on unlabeled data and then fine-tune it on a downstream task of interest. While pre-training has been demonstrated to be effective on homogeneous graphs, pre-training a GNN on a heterogeneous graph remains an open question: such graphs contain different types of nodes and edges, and this structural heterogeneity poses new challenges for graph pre-training. To capture the structural and semantic properties of heterogeneous graphs simultaneously, in this paper we develop a new strategy for Pre-training Heterogeneous Graph Neural Networks (PHGNN). The key to the success of PHGNN is that it uses two different tasks to capture two kinds of similarity in a heterogeneous graph: the similarity between nodes of the same type and the similarity between nodes of different types. In addition, PHGNN introduces an attribute type prediction task to preserve node attribute information. We systematically study pre-training on two real-world heterogeneous graphs. The results demonstrate that PHGNN improves generalization significantly across downstream tasks.
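The abstract describes three self-supervised objectives: a same-type similarity task, a cross-type similarity task, and an attribute type prediction task. The paper's actual formulation is not reproduced here; the following is only a hypothetical sketch of how such a joint objective could look, assuming an InfoNCE-style contrastive form for the two similarity tasks and a cross-entropy loss for attribute type prediction. All function names, the toy embeddings, and the loss shapes are illustrative assumptions, not PHGNN's implementation.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    # InfoNCE-style loss: pull the anchor toward its positive,
    # push it away from the negatives.
    pos = np.exp(cosine(anchor, positive) / tau)
    neg = sum(np.exp(cosine(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

def attribute_type_loss(logits, true_type):
    # Cross-entropy over predicted attribute types for a masked feature.
    z = np.exp(logits - logits.max())
    probs = z / z.sum()
    return -np.log(probs[true_type])

# Toy embeddings for a heterogeneous graph with 'author' (a*) and 'paper' (p*) nodes.
rng = np.random.default_rng(0)
emb = {f"a{i}": rng.normal(size=8) for i in range(3)}
emb.update({f"p{i}": rng.normal(size=8) for i in range(3)})

# Task 1: similarity between nodes of the same type (author vs. author).
l_same = contrastive_loss(emb["a0"], emb["a1"], [emb["a2"]])
# Task 2: similarity between nodes of different types (author vs. linked paper).
l_cross = contrastive_loss(emb["a0"], emb["p0"], [emb["p1"], emb["p2"]])
# Task 3: attribute type prediction (logits over, say, 4 attribute types).
l_attr = attribute_type_loss(rng.normal(size=4), true_type=2)

# Joint pre-training objective: sum of the three task losses.
total = l_same + l_cross + l_attr
```

In a real pre-training loop the embeddings would come from the GNN encoder and the three losses would be minimized jointly by gradient descent; the sketch only shows the shape of such a combined objective.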
Pages: 135411-135418
Page count: 8
Related Papers
50 records in total
  • [1] Neural Graph Matching for Pre-training Graph Neural Networks
    Hou, Yupeng
    Hu, Binbin
    Zhao, Wayne Xin
    Zhang, Zhiqiang
    Zhou, Jun
    Wen, Ji-Rong
    PROCEEDINGS OF THE 2022 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2022, : 172 - 180
  • [2] Pre-training on dynamic graph neural networks
    Chen, Ke-Jia
    Zhang, Jiajun
    Jiang, Linpu
    Wang, Yunyun
    Dai, Yuxuan
    NEUROCOMPUTING, 2022, 500 : 679 - 687
  • [3] Pre-training graph neural networks for link prediction in biomedical networks
    Long, Yahui
    Wu, Min
    Liu, Yong
    Fang, Yuan
    Kwoh, Chee Keong
    Chen, Jinmiao
    Luo, Jiawei
    Li, Xiaoli
    BIOINFORMATICS, 2022, 38 (08) : 2254 - 2262
  • [4] GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks
    Sun, Mingchen
    Zhou, Kaixiong
    He, Xin
    Wang, Ying
    Wang, Xin
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 1717 - 1727
  • [5] PSP: Pre-training and Structure Prompt Tuning for Graph Neural Networks
    Ge, Qingqing
    Zhao, Zeyuan
    Liu, Yiding
    Cheng, Anfeng
    Li, Xiang
    Wang, Shuaiqiang
    Yin, Dawei
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT V, ECML PKDD 2024, 2024, 14945 : 423 - 439
  • [6] Neighborhood-enhanced contrast for pre-training graph neural networks
    Li, Yichun
    Huang, Jin
    Yu, Weihao
    Zhang, Tinghua
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 : 4195 - 4205
  • [7] GPT-GNN: Generative Pre-Training of Graph Neural Networks
    Hu, Ziniu
    Dong, Yuxiao
    Wang, Kuansan
    Chang, Kai-Wei
    Sun, Yizhou
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 1857 - 1867
  • [9] Pre-training on Large-Scale Heterogeneous Graph
    Jiang, Xunqiang
    Jia, Tianrui
    Fang, Yuan
    Shi, Chuan
    Lin, Zhe
    Wang, Hui
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 756 - 766
  • [10] Train Once and Explain Everywhere: Pre-training Interpretable Graph Neural Networks
    Yin, Jun
    Li, Chaozhuo
    Yan, Hao
    Lian, Jianxun
    Wang, Senzhang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,