FaiRecSys: mitigating algorithmic bias in recommender systems

Cited by: 0
Authors
Bora Edizel
Francesco Bonchi
Sara Hajian
André Panisson
Tamir Tassa
Affiliations
[1] Pompeu Fabra University
[2] ISI Foundation
[3] Eurecat
[4] The Open University
Keywords
Algorithmic bias; Recommender systems; Fairness; Privacy
DOI: not available
Abstract
Recommendation and personalization are useful technologies that increasingly influence our daily decisions. However, as we show empirically in this paper, the bias that exists in the real world, and which is reflected in the training data, can be modeled and amplified by recommender systems, ultimately being returned to users in the form of biased recommendations. This feedback process creates a self-perpetuating loop that progressively strengthens the filter bubbles we live in. Biased recommendations can also reinforce stereotypes, such as those based on gender or ethnicity, possibly resulting in disparate impact. In this paper we address the problem of algorithmic bias in recommender systems. In particular, we highlight the connection between the predictability of sensitive features and bias in recommendation results, and we offer a theoretically founded bound on recommendation bias based on that connection. We then formalize a fairness constraint and the price that one has to pay, in terms of alterations to the recommendation matrix, in order to achieve fair recommendations. Finally, we propose FaiRecSys, an algorithm that mitigates algorithmic bias by post-processing the recommendation matrix with minimum impact on the utility of the recommendations provided to end-users.
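The connection the abstract draws between bias and the predictability of sensitive features can be illustrated with a small sketch. This is not the paper's algorithm; it is a toy experiment on synthetic data, assuming a simple nearest-centroid "attacker" and using the balanced error rate (BER) of predicting a sensitive attribute from a user's row of the recommendation matrix as the bias signal: a BER near 0.5 means the attribute is not recoverable from the recommendations, while a BER near 0 means it is fully exposed. The repair step (overwriting the leaking column with noise) is likewise only a stand-in for a principled post-processing of the matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 1000, 20
s = rng.integers(0, 2, size=n_users)  # sensitive attribute per user (0/1)

# Synthetic recommendation matrix: item 0's score is correlated with the
# sensitive attribute; the remaining items are pure noise.
R = rng.normal(size=(n_users, n_items))
R[:, 0] += 3.0 * s

def nearest_centroid_predict(X, y):
    """Predict y from rows of X with a nearest-centroid classifier."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

def ber(y, y_hat):
    """Balanced error rate: mean of the two class-conditional error rates."""
    e0 = np.mean(y_hat[y == 0] != 0)
    e1 = np.mean(y_hat[y == 1] != 1)
    return 0.5 * (e0 + e1)

pred = nearest_centroid_predict(R, s)
print(f"BER on biased matrix: {ber(s, pred):.2f}")  # well below 0.5: exposed

# Toy "repair": replace the leaking column with fresh noise, which pushes
# the attacker's BER back toward 0.5 (the attribute becomes unpredictable).
R_fair = R.copy()
R_fair[:, 0] = rng.normal(size=n_users)
pred_fair = nearest_centroid_predict(R_fair, s)
print(f"BER on repaired matrix: {ber(s, pred_fair):.2f}")  # near 0.5
```

In this sketch the repair destroys the leaking item entirely; the point of an approach like FaiRecSys is instead to raise the BER while altering the recommendation matrix as little as possible, preserving utility for end-users.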
Pages: 197–213 (16 pages)
Related papers (50 in total; entries [41]–[50] shown)
  • [41] Characterizing and Mitigating the Impact of Data Imbalance for Stakeholders in Recommender Systems
    Gomez, Elizabeth
    RECSYS 2020: 14TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, 2020, : 756 - 757
  • [42] Mitigating Cognitive Bias with Decision Support Systems
    Kueper, A.
    Lodde, G.
    Livingstone, E.
    Schadendorf, D.
    Kraemer, N.
    JOURNAL DER DEUTSCHEN DERMATOLOGISCHEN GESELLSCHAFT, 2022, 20 : 64 - 64
  • [43] The Potential of Diverse Youth as Stakeholders in Identifying and Mitigating Algorithmic Bias for a Future of Fairer AI
    Solyst, J.
    Yang, E.
    Xie, S.
    Ogan, A.
    Hammer, J.
    Eslami, M.
    PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION, 2023, 7 (CSCW2)
  • [44] Mitigating social bias in sentiment classification via ethnicity-aware algorithmic design
    Corizzo, Roberto
    Hafner, Franziska Sofia
    SOCIAL NETWORK ANALYSIS AND MINING, 2024, 14 (01)
  • [45] Exploring Categorizations of Algorithmic Affordances in Graphical User Interfaces of Recommender Systems
    Bartels, Ester
    Smits, Aletta
    Detweiler, Chris
    van der Stappen, Esther
    van Rossen, Suzanne
    Shayan, Shakila
    Pott, Katja
    Cardona, Karine
    Ziegler, Jurgen
    van Turnhout, Koen
    DESIGN FOR EQUALITY AND JUSTICE, INTERACT 2023, PT II, 2024, 14536 : 173 - 184
  • [46] Algorithmic Affordances in Recommender Interfaces
    Smits, Aletta
    Bartels, Ester
    Detweiler, Chris
    van Turnhout, Koen
    HUMAN-COMPUTER INTERACTION - INTERACT 2023, PT IV, 2023, 14145 : 605 - 609
  • [47] Coping with Homogeneous Information Flow in Recommender Systems: Algorithmic Resistance and Avoidance
    Zhang, Liang
    Bi, Wenjing
    Zhang, Ning
    He, Lifeng
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2024, 40 (22) : 6899 - 6912
  • [48] Enhancing Calibration and Reducing Popularity Bias in Recommender Systems
    de Souza, Rodrigo Ferrari
    Manzato, Marcelo Garcia
    ENTERPRISE INFORMATION SYSTEMS, ICEIS 2023, PT II, 2024, 519 : 3 - 24
  • [49] The Effect of Explanations and Algorithmic Accuracy on Visual Recommender Systems of Artistic Images
    Dominguez, Vicente
    Messina, Pablo
    Donoso-Guzman, Ivania
    Parra, Denis
    PROCEEDINGS OF IUI 2019, 2019, : 408 - 416
  • [50] Algorithmic modeling of public recommender systems: insights from selected cities
    Kamolov, Sergei
    Aleksandrov, Nikita
    TRANSFORMING GOVERNMENT- PEOPLE PROCESS AND POLICY, 2023, 17 (01) : 72 - 86