Towards Long-term Fairness in Recommendation

Cited by: 110
Authors
Ge, Yingqiang [1 ]
Liu, Shuchang [1 ]
Gao, Ruoyuan [1 ]
Xian, Yikun [1 ]
Li, Yunqi [1 ]
Zhao, Xiangyu [2 ]
Pei, Changhua [3 ]
Sun, Fei [3 ]
Ge, Junfeng [3 ]
Ou, Wenwu [3 ]
Zhang, Yongfeng [1 ]
Affiliations
[1] Rutgers State Univ, Newark, NJ 07102 USA
[2] Michigan State Univ, E Lansing, MI 48824 USA
[3] Alibaba Grp, Shenzhen, Peoples R China
Keywords
Recommender System; Long-term Fairness; Reinforcement Learning; Constrained Policy Optimization; Unbiased Recommendation;
DOI
10.1145/3437963.3441824
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As Recommender Systems (RS) influence more and more people in their daily lives, the issue of fairness in recommendation is becoming increasingly important. Most prior approaches to fairness-aware recommendation have been situated in a static or one-shot setting, where the protected groups of items are fixed, and the model provides a one-time fairness solution based on fairness-constrained optimization. This fails to consider the dynamic nature of recommender systems, where attributes such as item popularity may change over time due to the recommendation policy and user engagement. For example, products that were once popular may lose their popularity over time, and vice versa. As a result, a system that aims to maintain long-term fairness of item exposure across different popularity groups must accommodate this change in a timely fashion. Novel to this work, we explore the problem of long-term fairness in recommendation and address it through dynamic fairness learning. We focus on the fairness of exposure of items in different groups, where the division of the groups is based on item popularity, which changes dynamically over time in the recommendation process. We tackle this problem by proposing a fairness-constrained reinforcement learning algorithm for recommendation, which models the recommendation problem as a Constrained Markov Decision Process (CMDP), so that the model can dynamically adjust its recommendation policy to ensure the fairness requirement remains satisfied as the environment changes. Experiments on several real-world datasets verify our framework's superiority in terms of recommendation performance, short-term fairness, and long-term fairness.
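The CMDP formulation in the abstract — maximize recommendation reward subject to a group-exposure constraint — can be illustrated with a toy Lagrangian-relaxation sketch. This is a generic primal-dual illustration under assumed numbers (two item groups, a made-up reward gap, and an assumed exposure floor), not the paper's actual algorithm: popular items earn more reward, but the long-tail group must retain a minimum share of exposure.

```python
import numpy as np

# Toy sketch: fairness-constrained policy optimization via Lagrangian
# relaxation of a constrained MDP. All numbers are assumptions for
# illustration. Group 0 = "popular", group 1 = "long-tail".
reward = np.array([1.0, 0.6])   # assumed per-group expected reward
min_exposure = 0.4              # assumed exposure floor for group 1
theta = np.zeros(2)             # softmax policy logits
lam = 0.0                       # Lagrange multiplier for the constraint
exposures = []

def policy(logits):
    # Softmax over the two groups, stabilized by subtracting the max.
    e = np.exp(logits - logits.max())
    return e / e.sum()

for t in range(2000):
    lr = 0.2 / np.sqrt(t + 1)   # decaying step for primal and dual
    p = policy(theta)
    exposures.append(p[1])
    # Lagrangian: E_p[reward] + lam * (p[1] - min_exposure).
    # For a softmax policy, d/dtheta E_p[f] = p * (f - E_p[f]).
    f = reward + lam * np.array([0.0, 1.0])
    theta += lr * p * (f - p @ f)
    # Dual ascent: raise lam while the exposure constraint is violated,
    # let it fall back toward 0 once the constraint holds.
    lam = max(0.0, lam + lr * (min_exposure - p[1]))

avg_exposure = float(np.mean(exposures))
print(avg_exposure)  # time-averaged long-tail exposure, near the floor
```

Without the multiplier term the softmax policy would drift entirely toward the higher-reward popular group; the dual updates keep the time-averaged long-tail exposure near the constraint level, which is the basic mechanism a CMDP-based recommender exploits to hold a fairness requirement as conditions change.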
Pages: 445-453
Page count: 9
Related Papers
50 records
  • [41] The effects of the experience recommendation on short- and long-term happiness
    Sääksjärvi, Maria
    Hellén, Katarina
    Desmet, Pieter
    MARKETING LETTERS, 2016, 27 (04) : 675 - 686
  • [43] On achieving short-term QoS and long-term fairness in high speed networks
    Kim, MH
    Park, HS
    JOURNAL OF HIGH SPEED NETWORKS, 2004, 13 (03) : 233 - 248
  • [44] Towards long-term depolarized interactive recommendations
    Lechiakh, Mohamed
    El-Moutaouakkil, Zakaria
    Maurer, Alexandre
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (06)
  • [45] Towards a risk culture in the very long-term
    Gauduel, YA
    ACTUALITE CHIMIQUE, 2005, : 1 - 1
  • [46] Maximum Entropy Policy for Long-Term Fairness in Interactive Recommender Systems
    Shi, Xiaoyu
    Liu, Quanliang
    Xie, Hong
    Bai, Yanan
    Shang, Mingsheng
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (03) : 1029 - 1043
  • [47] A Fairness-Based Heuristic Technique for Long-Term Nurse Scheduling
    Senbel, Samah
    ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH, 2021, 38 (02)
  • [48] Towards the millennium: long-term therapy in gynecology
    Thomas, EJ
    INTERNATIONAL JOURNAL OF GYNECOLOGY & OBSTETRICS, 1999, 64 : S41 - S42
  • [49] Towards long-term infrastructure system performance
    Blom, Carron M.
    Guthrie, Peter M.
    PROCEEDINGS OF THE INSTITUTION OF CIVIL ENGINEERS-ENGINEERING SUSTAINABILITY, 2017, 170 (03) : 157 - 168
  • [50] EUROPE - TOWARDS A NEW LONG-TERM PROGRAM
    GIBSON, R
    SPACE POLICY, 1985, 1 (01) : 3 - 6