Multi-teacher knowledge distillation for debiasing recommendation with uniform data

Times Cited: 0
Authors
Yang, Xinxin [1 ]
Li, Xinwei [1 ]
Liu, Zhen [1 ]
Yuan, Yafan [1 ]
Wang, Yannan [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Debiasing recommendation; Knowledge distillation; Contrastive learning; Collaborative filtering;
DOI
10.1016/j.eswa.2025.126808
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recent studies have highlighted the bias problem in recommender systems, which hinders the learning of users' true preferences. One significant reason for bias is that the training data is missing not at random (MNAR). While existing approaches have demonstrated the usefulness of uniform data that is missing at random (MAR) for debiasing, current models do not fully exploit the unbiased features within uniform data. Given the value and limited size of uniform data, this paper proposes a multi-teacher knowledge distillation framework (UKDRec) to extract and transfer more unbiased information from uniform data. The proposed framework consists of two components: a label-based teacher model that leverages supervision signals and a feature-based teacher model that facilitates the transfer of comprehensive unbiased features. To effectively extract unbiased features, we introduce a contrastive learning strategy that combines the uniform data with control data. The framework is trained using a multi-task learning approach, which enhances the transfer of unbiased knowledge. Extensive experiments conducted on real-world datasets demonstrate the superior debiasing performance of our approach compared to competitive baselines.
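The abstract describes a multi-task objective that combines a supervised recommendation loss with distillation terms from a label-based teacher and a feature-based teacher. The sketch below illustrates how such an objective could be assembled in PyTorch. It is a minimal sketch, not the authors' implementation: the matrix-factorization backbone (MFRecommender), the loss weights (w_sup, w_label_kd, w_feat_kd), and all helper names are assumptions, and the contrastive pre-training of the feature-based teacher on uniform plus control data is not shown; in the paper's setting both teachers would be pretrained on the small uniform (MAR) split before distilling into a student trained on the biased (MNAR) data.

```python
# Minimal sketch (assumed names, not the authors' released code) of a
# multi-teacher distillation objective in the spirit of UKDRec.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MFRecommender(nn.Module):
    """Simple matrix-factorization backbone, used here for both student and teachers."""

    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def features(self, users, items):
        # Concatenated user/item representations, used for feature-level distillation.
        return torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)

    def forward(self, users, items):
        # Predicted preference score (logit) for each user-item pair.
        return (self.user_emb(users) * self.item_emb(items)).sum(-1)


def multi_teacher_kd_loss(student, label_teacher, feature_teacher,
                          users, items, labels,
                          w_sup=1.0, w_label_kd=0.5, w_feat_kd=0.5):
    """Multi-task objective: supervised loss on observed (biased) interactions
    plus two distillation terms transferring knowledge from the teachers."""
    logits = student(users, items)

    # (1) Supervised task on the observed interactions.
    sup = F.binary_cross_entropy_with_logits(logits, labels)

    # (2) Label-based teacher: match the teacher's soft predictions.
    with torch.no_grad():
        soft_targets = torch.sigmoid(label_teacher(users, items))
    label_kd = F.binary_cross_entropy_with_logits(logits, soft_targets)

    # (3) Feature-based teacher: align intermediate representations.
    with torch.no_grad():
        teacher_feat = feature_teacher.features(users, items)
    feat_kd = F.mse_loss(student.features(users, items), teacher_feat)

    return w_sup * sup + w_label_kd * label_kd + w_feat_kd * feat_kd


if __name__ == "__main__":
    n_users, n_items = 100, 200
    student = MFRecommender(n_users, n_items)
    label_teacher = MFRecommender(n_users, n_items)    # would be pretrained on uniform data
    feature_teacher = MFRecommender(n_users, n_items)  # would be pretrained with a contrastive objective

    users = torch.randint(0, n_users, (64,))
    items = torch.randint(0, n_items, (64,))
    labels = torch.randint(0, 2, (64,)).float()

    loss = multi_teacher_kd_loss(student, label_teacher, feature_teacher,
                                 users, items, labels)
    loss.backward()
    print(float(loss))
```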
Pages: 13
Related Papers
50 records in total
  • [1] Correlation Guided Multi-teacher Knowledge Distillation
    Shi, Luyao
    Jiang, Ning
    Tang, Jialiang
    Huang, Xinlei
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 562 - 574
  • [2] Reinforced Multi-Teacher Selection for Knowledge Distillation
    Yuan, Fei
    Shou, Linjun
    Pei, Jian
    Lin, Wutao
    Gong, Ming
    Fu, Yan
    Jiang, Daxin
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 14284 - 14291
  • [3] Knowledge Distillation via Multi-Teacher Feature Ensemble
    Ye, Xin
    Jiang, Rongxin
    Tian, Xiang
    Zhang, Rui
    Chen, Yaowu
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 566 - 570
  • [4] CONFIDENCE-AWARE MULTI-TEACHER KNOWLEDGE DISTILLATION
    Zhang, Hailin
    Chen, Defang
    Wang, Can
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4498 - 4502
  • [5] Adaptive multi-teacher multi-level knowledge distillation
    Liu, Yuang
    Zhang, Wei
    Wang, Jun
    NEUROCOMPUTING, 2020, 415 : 106 - 113
  • [6] Decoupled Multi-teacher Knowledge Distillation based on Entropy
    Cheng, Xin
    Tang, Jialiang
    Zhang, Zhiqiang
    Yu, Wenxin
    Jiang, Ning
    Zhou, Jinjia
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [7] Anomaly detection based on multi-teacher knowledge distillation
    Ma, Ye
    Jiang, Xu
    Guan, Nan
    Yi, Wang
    JOURNAL OF SYSTEMS ARCHITECTURE, 2023, 138
  • [8] Robust Semantic Segmentation With Multi-Teacher Knowledge Distillation
    Amirkhani, Abdollah
    Khosravian, Amir
    Masih-Tehrani, Masoud
    Kashiani, Hossein
    IEEE ACCESS, 2021, 9 : 119049 - 119066