Debias the Black-Box: A Fair Ranking Framework via Knowledge Distillation

Cited by: 0
Authors
Zhu, Zhitao [1,2]
Si, Shijing [3]
Wang, Jianzong [1]
Yang, Yaodong [4]
Xiao, Jing [1]
Affiliations
[1] Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
[2] University of Science and Technology of China, Institute of Advanced Technology (IAT), Hefei, China
[3] Shanghai International Studies University, School of Economics and Finance, Shanghai, China
[4] Peking University, Institute for Artificial Intelligence, Beijing, China
Keywords
Information Retrieval; Knowledge Distillation; Fairness; Learning to Rank; Exposure
DOI
10.1007/978-3-031-20891-1_28
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
With their many nonlinear units, deep neural networks can capture the intricate interaction history between queries and documents, allowing them to produce accurate search recommendations. In real-world settings, however, service providers often face additional obstacles such as deployment cost constraints and fairness requirements. Knowledge distillation, which transfers the knowledge of a well-trained complex model (the teacher) to a simple model (the student), has been proposed to address the former concern, but current state-of-the-art distillation methods focus only on making the student imitate the teacher's predictions. To better support the deployment of deep models, we propose a fair information retrieval framework based on knowledge distillation. The framework improves the exposure-based fairness of ranking models while considerably reducing model size. Extensive experiments on three large datasets show that the proposed framework can shrink a model to as little as 1% of its original size while the original model remains a black box, and improves fairness by 15%-46% while maintaining a high level of recommendation effectiveness.
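To make the two ingredients of the abstract concrete — score imitation via distillation and exposure-based fairness — the sketch below shows a minimal NumPy version of (a) a listwise distillation loss that pushes a small student ranker to match a black-box teacher's score distribution, and (b) a disparity measure comparing each group's share of position-weighted exposure with its share of total relevance. The function names, the logarithmic position-bias model, and the exact disparity definition are illustrative assumptions for this entry, not the formulation proposed in the paper.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def distillation_loss(student_scores, teacher_scores, temperature=2.0):
    # Listwise distillation: KL divergence between the teacher's and the
    # student's softened score distributions over the candidate documents.
    p_t = softmax(teacher_scores / temperature)
    p_s = softmax(student_scores / temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

def exposure_disparity(scores, groups, relevance):
    # Exposure-based unfairness: each group's share of position-weighted
    # exposure should be proportional to its share of total relevance (merit).
    order = np.argsort(-scores)                               # rank documents by score
    exposure = 1.0 / np.log2(np.arange(2, len(scores) + 2))   # logarithmic position bias
    worst = 0.0
    for g in np.unique(groups):
        exp_share = exposure[groups[order] == g].sum() / exposure.sum()
        rel_share = relevance[groups == g].sum() / relevance.sum()
        worst = max(worst, abs(exp_share - rel_share))
    return worst

# Toy query with six candidate documents split across two protected groups.
teacher_scores = np.array([4.0, 3.1, 2.9, 1.5, 1.0, 0.2])  # black-box teacher
student_scores = np.array([3.5, 3.0, 2.5, 1.8, 1.2, 0.5])  # compact student
groups         = np.array([0, 0, 1, 0, 1, 1])
relevance      = np.array([3.0, 2.0, 2.0, 1.0, 1.0, 0.0])

print("distillation loss :", distillation_loss(student_scores, teacher_scores))
print("exposure disparity:", exposure_disparity(student_scores, groups, relevance))
```

In a training setting, a fairness term of this kind would typically be added to the distillation objective with a trade-off weight, which is how a framework along these lines can trade a small amount of ranking effectiveness for a large gain in exposure fairness.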
Pages: 395-405
Number of pages: 11
Related Papers
50 records in total
  • [1] Black-Box Few-Shot Knowledge Distillation
    Nguyen, Dang
    Gupta, Sunil
    Do, Kien
    Venkatesh, Svetha
    COMPUTER VISION, ECCV 2022, PT XXI, 2022, 13681 : 196 - 211
  • [2] Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation
    Haselhoff, Anselm
    Kronenberger, Jan
    Kueppers, Fabian
    Schneider, Jonas
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 21 - 28
  • [3] Fair Wrapping for Black-box Predictions
    Soen, Alexander
    Alabdulmohsin, Ibrahim
    Koyejo, Sanmi
    Mansour, Yishay
    Moorosi, Nyalleng
    Nock, Richard
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [4] FedAL: Black-Box Federated Knowledge Distillation Enabled by Adversarial Learning
    Han, Pengchao
    Shi, Xingyan
    Huang, Jianwei
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2024, 42 (11) : 3064 - 3077
  • [5] Improving Diversity in Black-Box Few-Shot Knowledge Distillation
    Vo, Tri-Nhan
    Nguyen, Dang
    Do, Kien
    Gupta, Sunil
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT II, ECML PKDD 2024, 2024, 14942 : 178 - 196
  • [6] AKD: Using Adversarial Knowledge Distillation to Achieve Black-box Attacks
    Lian, Xin
    Huang, Zhiqiu
    Wang, Chao
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [7] SUBSTITUTE MODEL GENERATION FOR BLACK-BOX ADVERSARIAL ATTACK BASED ON KNOWLEDGE DISTILLATION
    Cui, Weiyu
    Li, Xiaorui
    Huang, Jiawei
    Wang, Wenyi
    Wang, Shuai
    Chen, Jianwen
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 648 - 652
  • [8] Ranking-Based Black-Box Complexity
    Doerr, Benjamin
    Winzen, Carola
    ALGORITHMICA, 2014, 68 (03) : 571 - 609
  • [9] Ranking-Based Black-Box Complexity
    Doerr, Benjamin
    Winzen, Carola
    Algorithmica, 2014, 68 : 571 - 609
  • [10] Black-Box Non-Black-Box Zero Knowledge
    Goyal, Vipul
    Ostrovsky, Rafail
    Scafuro, Alessandra
    Visconti, Ivan
    STOC'14: PROCEEDINGS OF THE 46TH ANNUAL 2014 ACM SYMPOSIUM ON THEORY OF COMPUTING, 2014, : 515 - 524