Iterative Graph Self-Distillation

Cited by: 2
Authors
Zhang, Hanlin [1 ]
Lin, Shuai [2 ]
Liu, Weiyang [3 ]
Zhou, Pan [4 ]
Tang, Jian [5 ]
Liang, Xiaodan [2 ]
Xing, Eric P. [6 ]
Affiliations
[1] Carnegie Mellon Univ, Machine Learning Dept, Pittsburgh, PA 15213 USA
[2] Sun Yat Sen Univ, Sch Intelligent Syst Engn, Guangzhou 510275, Guangdong, Peoples R China
[3] Univ Cambridge, Dept Comp Sci, Cambridge CB2 1TN, England
[4] SEA Grp Ltd, SEA AI Lab, Singapore 138680, Singapore
[5] HEC Montreal, Montreal, PQ H3T 2A7, Canada
[6] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
Keywords
Task analysis; Representation learning; Kernel; Graph neural networks; Iterative methods; Data augmentation; Training; graph representation learning; self-supervised learning
DOI
10.1109/TKDE.2023.3303885
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
How to discriminatively vectorize graphs has recently attracted increasing interest. To address this challenge, we propose Iterative Graph Self-Distillation (IGSD), a method that learns graph-level representations in an unsupervised manner through instance discrimination, using a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representation of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning IGSD-trained models with self-training can further improve graph representation learning. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which validates the superiority of IGSD.
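For readers who want the abstract's training loop in concrete form, the following minimal PyTorch sketch illustrates the two mechanisms the abstract names: a teacher built as an exponential moving average (EMA) of the student, and a student trained to predict the teacher's representation of a differently augmented view. All names here (Encoder, IGSDSketch, update_teacher) are ours, not the paper's; a plain MLP over fixed-size graph features stands in for the paper's GNN encoder with graph diffusion augmentations, and a BYOL-style cosine prediction loss stands in for the paper's contrastive consistency objective.

    import copy
    import torch
    import torch.nn.functional as F
    from torch import nn

    class Encoder(nn.Module):
        # Stand-in graph encoder: IGSD would use a GNN that pools node
        # embeddings into a graph-level vector; an MLP over fixed-size
        # graph features keeps this sketch self-contained.
        def __init__(self, dim_in, dim_out):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                     nn.Linear(256, dim_out))

        def forward(self, x):
            return self.net(x)

    class IGSDSketch(nn.Module):
        def __init__(self, dim_in=128, dim_out=64, ema_decay=0.99):
            super().__init__()
            self.student = Encoder(dim_in, dim_out)
            # The teacher starts as a copy of the student and is updated
            # only by the EMA rule below, never by gradients.
            self.teacher = copy.deepcopy(self.student)
            for p in self.teacher.parameters():
                p.requires_grad_(False)
            # The predictor maps student embeddings onto teacher targets.
            self.predictor = nn.Sequential(nn.Linear(dim_out, dim_out),
                                           nn.ReLU(),
                                           nn.Linear(dim_out, dim_out))
            self.ema_decay = ema_decay

        @torch.no_grad()
        def update_teacher(self):
            # Teacher weights = exponential moving average of student weights.
            for pt, ps in zip(self.teacher.parameters(),
                              self.student.parameters()):
                pt.mul_(self.ema_decay).add_(ps, alpha=1.0 - self.ema_decay)

        def loss(self, view_a, view_b):
            # The student sees one augmented view and predicts the teacher's
            # representation of the other view of the same graphs.
            z_s = F.normalize(self.predictor(self.student(view_a)), dim=-1)
            with torch.no_grad():
                z_t = F.normalize(self.teacher(view_b), dim=-1)
            # Cosine prediction loss; the caller symmetrizes over views.
            return (2.0 - 2.0 * (z_s * z_t).sum(dim=-1)).mean()

    # Toy training step: random batches stand in for two diffusion-augmented
    # views of the same batch of graphs.
    model = IGSDSketch()
    opt = torch.optim.Adam(list(model.student.parameters())
                           + list(model.predictor.parameters()), lr=1e-3)
    view_a, view_b = torch.randn(32, 128), torch.randn(32, 128)
    loss = 0.5 * (model.loss(view_a, view_b) + model.loss(view_b, view_a))
    opt.zero_grad()
    loss.backward()
    opt.step()
    model.update_teacher()

Calling update_teacher after each optimizer step is what makes the distillation iterative in the abstract's sense: the teacher slowly tracks the student, and the student keeps chasing the moving teacher's targets.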
Pages: 1161-1169
Number of pages: 9
Related Papers
50 items in total
  • [1] Reverse Self-Distillation Overcoming the Self-Distillation Barrier
    Ni, Shuiping
    Ma, Xinliang
    Zhu, Mingfu
    Li, Xingwang
    Zhang, Yu-Dong
    IEEE OPEN JOURNAL OF THE COMPUTER SOCIETY, 2023, 4 : 195 - 205
  • [2] A Teacher-Free Graph Knowledge Distillation Framework With Dual Self-Distillation
    Wu, Lirong
    Lin, Haitao
    Gao, Zhangyang
    Zhao, Guojiang
    Li, Stan Z.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (09) : 4375 - 4385
  • [3] Unbiased scene graph generation using the self-distillation method
    Sun, Bo
    Hao, Zhuo
    Yu, Lejun
    He, Jun
    VISUAL COMPUTER, 2024, 40 (04) : 2381 - 2390
  • [4] Self-Supervised Spatiotemporal Graph Neural Networks With Self-Distillation for Traffic Prediction
    Ji, Junzhong
    Yu, Fan
    Lei, Minglong
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (02) : 1580 - 1593
  • [5] Probabilistic online self-distillation
    Tzelepi, Maria
    Passalis, Nikolaos
    Tefas, Anastasios
    NEUROCOMPUTING, 2022, 493 : 592 - 604
  • [6] GRACE: Graph Self-Distillation and Completion to Mitigate Degree-Related Biases
    Xu, Hui
    Xiang, Liyao
    Huang, Femke
    Weng, Yuting
    Xu, Ruijie
    Wang, Xinbing
    Zhou, Chenghu
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 2813 - 2824
  • [7] A Lightweight Graph Neural Network Algorithm for Action Recognition Based on Self-Distillation
    Feng, Miao
    Meunier, Jean
    ALGORITHMS, 2023, 16 (12)
  • [8] Bayesian Optimization Meets Self-Distillation
    Lee, HyunJae
    Song, Heon
    Lee, Hyeonsoo
    Lee, Gi-hyeon
    Park, Suyeong
    Yoo, Donggeun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1696 - 1705
  • [9] Restructuring the Teacher and Student in Self-Distillation
    Zheng, Yujie
    Wang, Chong
    Tao, Chenchen
    Lin, Sunqi
    Qian, Jiangbo
    Wu, Jiafei
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 5551 - 5563