Boosting Adaptive Graph Augmented MLPs via Customized Knowledge Distillation

Cited by: 0
Authors
Wei, Shaowei [1 ]
Wu, Zhengwei [1 ]
Zhang, Zhiqiang [1 ]
Zhou, Jun [1 ]
Affiliations
[1] Ant Grp, Hangzhou, Peoples R China
Keywords
Graph learning; Inference acceleration; Knowledge distillation; Inductive learning
DOI
10.1007/978-3-031-43418-1_6
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
While Graph Neural Networks (GNNs) have shown convincing performance in handling non-Euclidean network data, the high inference latency caused by the message-passing mechanism hinders their deployment in real-time scenarios. One emerging inference acceleration approach is to distill knowledge from teacher GNNs into message-passing-free student multi-layer perceptrons (MLPs). Nevertheless, because graph heterophily degrades the performance of teacher GNNs, and because student MLPs generalize poorly on graph data, such GNN-to-MLP designs often achieve inferior performance. To tackle this challenge, we propose boosting adaptive GRaph Augmented MLPs via Customized knowlEdge Distillation (GRACED), a novel approach to learning graph knowledge effectively and efficiently. Specifically, we first design a novel customized knowledge distillation strategy that modifies the guided knowledge to mitigate the adverse influence of heterophily on student MLPs. Then, we introduce an adaptive graph propagation approach that precomputes an aggregated feature for each node, accounting for both homophily and heterophily, to help the student MLPs learn graph information. Furthermore, we design an aggregation-feature approximation technique for inductive scenarios. Extensive experiments on node classification and theoretical analyses demonstrate the superiority of GRACED over state-of-the-art methods under both transductive and inductive settings across homophilic and heterophilic datasets.
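For readers unfamiliar with the GNN-to-MLP distillation setup the abstract builds on, the following is a minimal, hedged sketch of the generic teacher-GNN-to-student-MLP distillation step (GLNN-style soft-label matching). It is not the authors' GRACED implementation: the customized distillation strategy, adaptive graph propagation, and inductive aggregation-feature approximation described above are omitted, and all class, function, and parameter names (e.g. StudentMLP, distill_step, alpha, tau) are hypothetical.

```python
# Hypothetical sketch of GNN-to-MLP knowledge distillation (not GRACED itself).
import torch
import torch.nn.functional as F


class StudentMLP(torch.nn.Module):
    """Message-passing-free student: a plain MLP over node features."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hid_dim),
            torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, n_classes),
        )

    def forward(self, x):
        # Uses node features only; no graph structure at inference time.
        return self.net(x)


def distill_step(student, optimizer, x, y, teacher_logits, train_mask,
                 alpha=0.5, tau=2.0):
    """One training step: cross-entropy on labeled nodes plus a
    temperature-scaled KL term matching precomputed teacher GNN logits."""
    student.train()
    optimizer.zero_grad()
    logits = student(x)
    ce = F.cross_entropy(logits[train_mask], y[train_mask])
    kd = F.kl_div(
        F.log_softmax(logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    loss = alpha * ce + (1.0 - alpha) * kd
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this generic formulation the teacher logits are computed once offline, so the student's inference cost is that of an MLP; GRACED additionally reshapes the guided knowledge and augments the student's inputs with precomputed adaptive aggregation features, per the abstract.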
Pages: 87-103
Number of pages: 17