Boosting Adaptive Graph Augmented MLPs via Customized Knowledge Distillation

Times Cited: 0
Authors
Wei, Shaowei [1 ]
Wu, Zhengwei [1 ]
Zhang, Zhiqiang [1 ]
Zhou, Jun [1 ]
Affiliations
[1] Ant Group, Hangzhou, People's Republic of China
Keywords
Graph learning; Inference acceleration; Knowledge distillation; Inductive learning
DOI
10.1007/978-3-031-43418-1_6
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
While Graph Neural Networks (GNNs) have shown convincing performance in handling non-Euclidean network data, the high inference latency caused by the message-passing mechanism hinders their deployment in real-time scenarios. One emerging inference acceleration approach is to distill knowledge from teacher GNNs into message-passing-free student multi-layer perceptrons (MLPs). Nevertheless, because graph heterophily degrades the performance of teacher GNNs and student MLPs generalize poorly on graph data, such GNN-to-MLP designs often achieve inferior performance. To tackle this challenge, we propose boosting adaptive GRaph Augmented MLPs via Customized knowlEdge Distillation (GRACED), a novel approach to learning graph knowledge effectively and efficiently. Specifically, we first design a novel customized knowledge distillation strategy that modifies the guiding knowledge to mitigate the adverse influence of heterophily on student MLPs. Then, we introduce an adaptive graph propagation approach that precomputes aggregated features for each node, accounting for both homophily and heterophily, to help the student MLPs learn graph information. Furthermore, we design an aggregation feature approximation technique for inductive scenarios. Extensive experiments on node classification tasks and theoretical analyses demonstrate the superiority of GRACED over state-of-the-art methods under both transductive and inductive settings across homophilic and heterophilic datasets.
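For readers unfamiliar with the GNN-to-MLP distillation paradigm the abstract refers to, the following is a minimal, hypothetical sketch in PyTorch, not the authors' GRACED implementation. It precomputes a simple mean-aggregation of node features offline and trains a plain MLP student with a weighted combination of cross-entropy on labels and temperature-softened KL divergence against teacher logits. All names (StudentMLP, precompute_aggregation, distill_step) and hyperparameters (alpha, tau, n_hops) are illustrative assumptions; GRACED's customized distillation and adaptive propagation are more elaborate than this.

    # Illustrative sketch only; NOT the GRACED implementation described in the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StudentMLP(nn.Module):
        """Message-passing-free student operating on precomputed node features."""
        def __init__(self, in_dim, hid_dim, n_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(hid_dim, n_classes),
            )
        def forward(self, x):
            return self.net(x)

    def precompute_aggregation(adj, x, n_hops=2):
        """Precompute multi-hop aggregated features offline so the student never
        touches the graph at inference time (assumption: plain mean aggregation)."""
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = x
        for _ in range(n_hops):
            h = adj @ h / deg              # one hop of neighborhood averaging
        return torch.cat([x, h], dim=1)    # keep ego features alongside aggregation

    def distill_step(student, feats, labels, teacher_logits, train_mask,
                     optimizer, alpha=0.5, tau=2.0):
        """One training step mixing hard-label loss with soft teacher guidance."""
        student.train()
        optimizer.zero_grad()
        logits = student(feats)
        ce = F.cross_entropy(logits[train_mask], labels[train_mask])
        kd = F.kl_div(
            F.log_softmax(logits / tau, dim=1),
            F.softmax(teacher_logits / tau, dim=1),
            reduction="batchmean",
        ) * tau * tau
        loss = alpha * ce + (1.0 - alpha) * kd
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        # Toy random graph standing in for a real dataset.
        n, d, c = 100, 16, 4
        x = torch.randn(n, d)
        adj = (torch.rand(n, n) < 0.05).float()
        adj = ((adj + adj.t()) > 0).float()
        labels = torch.randint(0, c, (n,))
        teacher_logits = torch.randn(n, c)     # stand-in for a trained GNN's outputs
        train_mask = torch.rand(n) < 0.6

        feats = precompute_aggregation(adj, x)
        student = StudentMLP(feats.size(1), 32, c)
        opt = torch.optim.Adam(student.parameters(), lr=0.01)
        for epoch in range(50):
            distill_step(student, feats, labels, teacher_logits, train_mask, opt)

Because all graph aggregation happens offline, inference reduces to a single MLP forward pass per node, which is the source of the latency savings the abstract describes.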
Pages: 87-103
Number of Pages: 17