Knowledge distillation via adaptive meta-learning for graph neural network

Times Cited: 0
Authors
Shen, Tiesunlong [1 ]
Wang, Jin [1 ]
Zhang, Xuejie [1 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks; Knowledge distillation; Meta-learning; Representation learning;
DOI
10.1016/j.ins.2024.121505
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the ever-increasing scale of graph-structured data, the computational resource requirements of large-scale graph neural networks (GNNs) can impede their deployment on resource-limited devices. Through knowledge distillation (KD), the expertise of a large, previously trained model (the teacher) can be transferred to a smaller architecture (the student) while maintaining comparable performance. However, existing KD approaches for GNNs typically freeze the teacher model while the student learns. Because the teacher never considers the student's learning feedback, the student's performance degrades once distillation is complete. To address this issue, this study proposes an expertise-transfer method via adaptive meta-learning for GNNs. The teacher continuously updates its parameters according to the student's optimal gradient direction in each KD step; thus, the teacher learns to teach appropriate knowledge to the student. To preserve the structural features of each node and further avoid over-smoothing, a local structure preservation loss is also introduced. Comprehensive experiments on four benchmarks demonstrate the effectiveness of the proposed method.
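For concreteness, below is a minimal PyTorch sketch of the kind of training loop the abstract describes: the student distills from the teacher under a local-structure term, and the teacher is then updated using the student's feedback. The TinyGCN model, loss weights, temperature, and the first-order teacher update are all illustrative assumptions; the paper's actual meta-learning update, which follows the student's optimal gradient direction, is not reproduced here.

# A minimal sketch of adaptive teacher-student distillation on a graph.
# Everything here (architecture, weights, update rule) is an assumption
# for illustration, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    # A deliberately small dense-adjacency GCN, just to make the sketch runnable.
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        h = F.relu(adj @ self.lin1(x))        # one propagation step
        return self.lin2(adj @ h), h          # (logits, node embeddings)

def kd_kl(p_logits, q_logits, T=2.0):
    # KL(p || q) on temperature-softened logits, written out so gradients can
    # flow to either argument (detach the side that should stay fixed).
    p = F.softmax(p_logits / T, dim=-1)
    log_q = F.log_softmax(q_logits / T, dim=-1)
    return (p * (torch.log(p.clamp_min(1e-8)) - log_q)).sum(-1).mean() * T * T

def local_structure_loss(s_h, t_h, adj):
    # Hypothetical local-structure preservation term: make the student's
    # node-to-neighbor cosine similarities match the teacher's over the edges.
    s_sim = F.normalize(s_h, dim=-1) @ F.normalize(s_h, dim=-1).T
    t_sim = F.normalize(t_h, dim=-1) @ F.normalize(t_h, dim=-1).T
    mask = adj > 0
    return F.mse_loss(s_sim[mask], t_sim[mask])

def distill_step(teacher, student, opt_t, opt_s, x, adj, y, alpha=0.5):
    # 1) Student step: fit the labels while imitating the teacher, which is
    #    held fixed (detached) for this half of the step.
    t_logits, t_h = teacher(x, adj)
    s_logits, s_h = student(x, adj)
    student_loss = (F.cross_entropy(s_logits, y)
                    + kd_kl(t_logits.detach(), s_logits)
                    + alpha * local_structure_loss(s_h, t_h.detach(), adj))
    opt_s.zero_grad(); student_loss.backward(); opt_s.step()

    # 2) Teacher step (a first-order stand-in for the meta-update): nudge the
    #    teacher toward outputs that agree with the freshly updated student
    #    while staying accurate itself; the paper instead follows the
    #    student's optimal gradient direction via meta-learning.
    s_logits_new, _ = student(x, adj)
    t_logits, _ = teacher(x, adj)
    teacher_loss = F.cross_entropy(t_logits, y) + kd_kl(s_logits_new.detach(), t_logits)
    opt_t.zero_grad(); teacher_loss.backward(); opt_t.step()
    return student_loss.item(), teacher_loss.item()

if __name__ == "__main__":
    # Toy random graph: 32 nodes, 16 features, 4 classes.
    N, D, C = 32, 16, 4
    x, y = torch.randn(N, D), torch.randint(0, C, (N,))
    adj = ((torch.rand(N, N) < 0.1).float() + torch.eye(N)).clamp(max=1.0)
    adj = ((adj + adj.T) > 0).float()
    adj = adj / adj.sum(1, keepdim=True)          # row-normalized propagation
    teacher, student = TinyGCN(D, 64, C), TinyGCN(D, 8, C)
    opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)
    opt_s = torch.optim.Adam(student.parameters(), lr=1e-2)
    for step in range(5):
        print(distill_step(teacher, student, opt_t, opt_s, x, adj, y))

The key design point in this sketch is that kd_kl is differentiable in both arguments, so detaching one side selects whether the student or the teacher receives the gradient in each half-step.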
Pages: 17