A multi-graph neural group recommendation model with meta-learning and multi-teacher distillation

Cited by: 3
|
Authors
Zhou, Weizhen [1 ]
Huang, Zhenhua [1 ,2 ]
Wang, Cheng [3 ]
Chen, Yunwen [4 ]
Affiliations
[1] South China Normal Univ, Sch Artificial Intelligence, Foshan 528225, Peoples R China
[2] South China Normal Univ, Sch Comp Sci, Guangzhou 510631, Peoples R China
[3] Tongji Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[4] DataGrand Inc, 112 Liangxiu Rd, Shanghai 201203, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Group recommendation; Graph auto-encoder; Meta-learning; Knowledge distilling; Deep learning;
DOI
10.1016/j.knosys.2023.110731
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Group recommendation has garnered significant attention recently, aiming to suggest items of interest to groups. Most deep learning-based approaches to group recommendation focus on learning group representations from group-user interactions or by aggregating user preferences. However, these approaches face challenges due to the complexity of group representation learning and the limited availability of group-item interactions. To address these difficulties, we propose a multi-graph neural group recommendation model with meta-learning and multi-teacher distillation. Our model consists of three stages: multiple graphs representation learning (MGRL), meta-learning-based knowledge transfer (MLKT), and multi-teacher distillation (MTD). In the MGRL stage, we construct two bipartite graphs and an undirected weighted graph based on group-user interactions, group-item interactions, and similarity between groups, respectively. We then utilize a linear variational graph auto-encoder network to learn the semantic features of groups, users, and items. In the MLKT stage, we introduce a multilayer perceptron (MLP) network for the recommendation task and design a meta-learning-based knowledge transfer algorithm to train an initialization parameter that incorporates user preference knowledge. In the MTD stage, we use the trained parameters to initialize all teacher networks, evenly divide the group-item interactions among the teachers, and sequentially train the teacher networks. Finally, we employ a multi-teacher distillation approach to train a student network with superior performance. Furthermore, we conduct extensive experiments on six real-world datasets to evaluate the effectiveness of the proposed model. The experimental results show that our model outperforms state-of-the-art approaches on group recommendation. © 2023 Elsevier B.V. All rights reserved.
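As a rough illustration of how the stages described in the abstract could fit together, below is a minimal PyTorch sketch of the MTD stage: every teacher starts from the meta-learned initialization produced in the MLKT stage, is trained on an equal shard of the group-item interactions, and a student is then distilled from the average of the teachers' soft predictions. The MLPRecommender class, the shard and loader formats, the temperature, and the 0.5 loss weighting are illustrative assumptions and not details taken from the paper.

```python
# Minimal sketch of the multi-teacher distillation (MTD) idea from the abstract.
# Assumptions (not from the paper): an MLP scorer over concatenated group/item
# embeddings, teachers trained on equal shards of the group-item interactions,
# and a student distilled from the average of the teachers' soft predictions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLPRecommender(nn.Module):
    """Scores a (group, item) pair from pre-learned embeddings (MGRL stage)."""
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, group_emb, item_emb):
        return self.net(torch.cat([group_emb, item_emb], dim=-1)).squeeze(-1)


def train_teachers(meta_init, shards, epochs=5, lr=1e-3):
    """Each teacher is initialized from the meta-learned parameters (MLKT stage)
    and trained on its own shard of group-item interaction batches."""
    teachers = []
    for shard in shards:  # shard: iterable of (group_emb, item_emb, label) batches
        teacher = copy.deepcopy(meta_init)
        opt = torch.optim.Adam(teacher.parameters(), lr=lr)
        for _ in range(epochs):
            for g, i, y in shard:
                loss = F.binary_cross_entropy_with_logits(teacher(g, i), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        teachers.append(teacher.eval())
    return teachers


def distill_student(student, teachers, loader, alpha=0.5, temp=2.0, lr=1e-3, epochs=5):
    """The student matches the averaged, temperature-softened teacher predictions
    while also fitting the observed group-item interactions."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for g, i, y in loader:
            with torch.no_grad():
                soft = torch.stack(
                    [torch.sigmoid(t(g, i) / temp) for t in teachers]
                ).mean(0)
            logits = student(g, i)
            hard_loss = F.binary_cross_entropy_with_logits(logits, y)
            soft_loss = F.binary_cross_entropy_with_logits(logits / temp, soft)
            loss = alpha * hard_loss + (1 - alpha) * soft_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Averaging the teachers' soft predictions is only one of several plausible aggregation schemes; the paper's actual distillation objective and teacher-weighting strategy may differ.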
Pages: 16
Related papers
50 records in total
  • [1] Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning
    Zhang, Hailin
    Chen, Defang
    Wang, Can
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1943 - 1948
  • [2] Multi-Teacher Distillation With Single Model for Neural Machine Translation
    Liang, Xiaobo
    Wu, Lijun
    Li, Juntao
    Qin, Tao
    Zhang, Min
    Liu, Tie-Yan
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 992 - 1002
  • [3] A multi-teacher learning automata computing model for graph partitioning problems
    Ikebo, S
    Qian, F
    Hirata, H
    [J]. ELECTRICAL ENGINEERING IN JAPAN, 2004, 148 (01) : 46 - 53
  • [4] A multi-teacher learning automata computing model for graph partitioning problems
    Ikebo, Shigeya
    Qian, Fei
    Hirata, Hironori
    [J]. ELECTRICAL ENGINEERING IN JAPAN (English translation of Denki Gakkai Ronbunshi), 2004, 148 (01): : 46 - 53
  • [5] Adaptive multi-graph contrastive learning for bundle recommendation
    Tao, Qian
    Liu, Chenghao
    Xia, Yuhan
    Xu, Yong
    Li, Lusi
    [J]. NEURAL NETWORKS, 2025, 181
  • [6] Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation
    Cao, Shengcao
    Li, Mengtian
    Hays, James
    Ramanan, Deva
    Wang, Yu-Xiong
    Gui, Liang-Yan
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [7] Deep multi-graph neural networks with attention fusion for recommendation
    Song, Yuzhi
    Ye, Hailiang
    Li, Ming
    Cao, Feilong
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2022, 191
  • [8] Adaptive multi-teacher multi-level knowledge distillation
    Liu, Yuang
    Zhang, Wei
    Wang, Jun
    [J]. NEUROCOMPUTING, 2020, 415 : 106 - 113
  • [9] Reinforced Multi-Teacher Selection for Knowledge Distillation
    Yuan, Fei
    Shou, Linjun
    Pei, Jian
    Lin, Wutao
    Gong, Ming
    Fu, Yan
    Jiang, Daxin
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 14284 - 14291
  • [10] Adaptive multi-teacher multi-level knowledge distillation
    Liu, Yuang
    Zhang, Wei
    Wang, Jun
    [J]. NEUROCOMPUTING, 2021, 415 : 106 - 113