MAFD: A Federated Distillation Approach with Multi-head Attention for Recommendation Tasks

Cited by: 0
Authors
Wu, Aming [1 ]
Kwon, Young-Woo [1 ]
Affiliations
[1] Kyungpook Natl Univ, Daegu, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Federated learning; Multi-head attention; Wasserstein distance; Recommendation systems;
DOI
10.1145/3555776.3577849
Chinese Library Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
The key challenges that recommendation systems must overcome are data isolation and privacy protection. Federated learning can efficiently train global models on decentralized data while preserving privacy. In real-world applications, however, it is difficult to achieve high prediction accuracy due to device heterogeneity, data scarcity, and the limited generalization capacity of models. In this work, we introduce a personalized federated knowledge distillation model for recommendation systems based on a multi-head attention mechanism. Specifically, we first employ federated distillation to improve the performance of student models and introduce a multi-head attention mechanism to enrich user encoding information. Next, we incorporate the Wasserstein distance into the objective function of the combined distillation to reduce the distribution gap between the teacher and student networks, and we apply an adaptive learning rate technique to improve convergence. Benchmark experiments show that the proposed approach achieves better effectiveness and robustness.
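The combined distillation objective described in the abstract (a supervised task loss, a teacher-student divergence, and a Wasserstein term penalizing the gap between the two output distributions) can be illustrated with a minimal sketch. The weights `alpha` and `beta`, the temperature `t`, and the choice of a 1-D Wasserstein distance over class probabilities are illustrative assumptions for this toy example, not the paper's actual formulation:

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax; higher t softens the distribution.
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # KL divergence KL(p || q) between two categorical distributions.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def w1(p, q):
    # 1-D Wasserstein-1 distance between two categorical distributions
    # over the same ordered support: sum of absolute CDF differences.
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))))

def distill_loss(student_logits, teacher_logits, label, t=2.0, alpha=0.5, beta=0.1):
    # Combined objective: hard-label cross-entropy on the student,
    # plus softened teacher-student KL, plus a Wasserstein penalty.
    p_s = softmax(student_logits, t)
    p_t = softmax(teacher_logits, t)
    ce = -np.log(softmax(student_logits)[label] + 1e-12)
    return (1 - alpha) * ce + alpha * (t ** 2) * kl(p_t, p_s) + beta * w1(p_t, p_s)
```

When the student matches the teacher, the KL and Wasserstein terms vanish and only the supervised term remains; as the two output distributions drift apart, both penalty terms grow, which is the effect the distribution-gap term in the objective is meant to capture.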
Pages: 1221-1224 (4 pages)
Related Papers (50 in total)
  • [1] Enhancing Recommendation Capabilities Using Multi-Head Attention-Based Federated Knowledge Distillation
    Wu, Aming
    Kwon, Young-Woo
    [J]. IEEE Access, 2023, 11: 45850-45861
  • [2] Combining Multi-Head Attention and Sparse Multi-Head Attention Networks for Session-Based Recommendation
    Zhao, Zhiwei
    Wang, Xiaoye
    Xiao, Yingyuan
    [J]. 2023 International Joint Conference on Neural Networks (IJCNN), 2023
  • [3] Personalized federated learning based on multi-head attention algorithm
    Jiang, Shanshan
    Lu, Meixia
    Hu, Kai
    Wu, Jiasheng
    Li, Yaogen
    Weng, Liguo
    Xia, Min
    Lin, Haifeng
    [J]. International Journal of Machine Learning and Cybernetics, 2023, 14 (11): 3783-3798
  • [4] Neural News Recommendation with Multi-Head Self-Attention
    Wu, Chuhan
    Wu, Fangzhao
    Ge, Suyu
    Qi, Tao
    Huang, Yongfeng
    Xie, Xing
    [J]. Proceedings of EMNLP-IJCNLP 2019, 2019: 6389-6394
  • [5] Leveraging mixed distribution of multi-head attention for sequential recommendation
    Zhang, Yihao
    Liu, Xiaoyang
    [J]. Applied Intelligence, 2023, 53 (1): 454-469
  • [6] Personalized News Recommendation with CNN and Multi-Head Self-Attention
    Li, Aibin
    He, Tingnian
    Guo, Yi
    Li, Zhuoran
    Rong, Yixuan
    Liu, Guoqi
    [J]. 2022 IEEE 13th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 2022: 102-108
  • [7] Hybrid graph convolutional networks with multi-head attention for location recommendation
    Zhong, Ting
    Zhang, Shengming
    Zhou, Fan
    Zhang, Kunpeng
    Trajcevski, Goce
    Wu, Jin
    [J]. World Wide Web, 2020, 23: 3125-3151
  • [8] Sequential Recommendation Using Deep Reinforcement Learning and Multi-Head Attention
    Sultan, Raneem
    Abu-Elkheir, Mervat
    [J]. 2022 56th Annual Conference on Information Sciences and Systems (CISS), 2022: 258-262