FedLaw: Value-Aware Federated Learning With Individual Fairness and Coalition Stability

Citations: 0
Authors
Lu, Jianfeng [1 ]
Zhang, Hangjian [2 ]
Zhou, Pan [3 ]
Wang, Xiong [4 ]
Wang, Chen [5 ]
Wu, Dapeng Oliver [6 ]
Affiliations
[1] Wuhan Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430065, Peoples R China
[2] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Cyber Sci & Engn, Wuhan 430074, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[5] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[6] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Computational modeling; Training; Data models; Servers; Stability analysis; Optimization; Accuracy; Federated learning (FL); contribution evaluation; model aggregation; individual fairness; coalition stability; CORE;
DOI
10.1109/TETCI.2024.3446458
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A long-standing problem in Federated Learning (FL) is that heterogeneous clients often derive diverse gains from, and impose diverse requirements on, the trained model, while their contributions are hard to evaluate because training is privacy-preserving. Existing works mainly rely on a single-dimensional metric to compute clients' contributions as aggregation weights, which can damage social fairness, discouraging worse-off clients from cooperating and destabilizing revenue. To tackle this issue, we propose a novel incentive mechanism named FedLaw that effectively evaluates clients' contributions and assigns aggregation weights accordingly. Specifically, we reuse the local model updates and model the contribution evaluation process as a convex coalition game among multiple players with a non-empty core. By deriving a closed-form expression of the Shapley value, we solve for the game's core in quadratic time. Moreover, we theoretically prove that FedLaw guarantees individual fairness, coalition stability, computational efficiency, collective rationality, redundancy, symmetry, additivity, strict desirability, and individual monotonicity, and we show that FedLaw achieves a constant convergence bound. Extensive experiments on four real-world datasets validate the superiority of FedLaw over five state-of-the-art baselines in terms of model aggregation, fairness, and time overhead. Experimental results show that FedLaw reduces the computation time of contribution evaluation by about 12x and improves global model performance by about 2% while ensuring fairness.
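The record does not reproduce FedLaw's closed-form, quadratic-time derivation, but the standard Shapley-value contribution scheme the abstract builds on can be sketched as follows. This is a minimal exact (exponential-time) computation for illustration only; the characteristic function `quality` (a coalition's hypothetical model utility) and the client names are illustrative stand-ins, not data from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley value of each player:
    phi_i = sum over coalitions S not containing i of
            |S|! * (n - |S| - 1)! / n! * (v(S + {i}) - v(S))."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):  # r = coalition size |S|
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[p] += weight * (v(frozenset(S) | {p}) - v(frozenset(S)))
    return phi

# Hypothetical coalition utilities (e.g., validation accuracy of the
# model aggregated from each subset of clients).
quality = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.4,
    frozenset({"B"}): 0.3,
    frozenset({"A", "B"}): 0.9,
}
v = lambda S: quality[frozenset(S)]

phi = shapley_values(["A", "B"], v)
# phi["A"] ≈ 0.5, phi["B"] ≈ 0.4; they sum to v(grand coalition) = 0.9.

# Normalized Shapley values can then serve as aggregation weights.
total = sum(phi.values())
weights = {p: phi[p] / total for p in phi}
```

Efficiency of the Shapley value guarantees the weights account for every client's full marginal contribution; FedLaw's contribution, per the abstract, is avoiding this exponential enumeration via a closed-form expression solved in quadratic time.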
Pages: 1049-1062
Page count: 14