FedLaw: Value-Aware Federated Learning With Individual Fairness and Coalition Stability

Cited by: 0
Authors
Lu, Jianfeng [1 ]
Zhang, Hangjian [2 ]
Zhou, Pan [3 ]
Wang, Xiong [4 ]
Wang, Chen [5 ]
Wu, Dapeng Oliver [6 ]
Affiliations
[1] Wuhan Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430065, Peoples R China
[2] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Cyber Sci & Engn, Wuhan 430074, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[5] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[6] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Training; Data models; Servers; Stability analysis; Optimization; Accuracy; Federated learning (FL); contribution evaluation; model aggregation; individual fairness; coalition stability; CORE;
DOI
10.1109/TETCI.2024.3446458
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A long-standing problem in Federated Learning (FL) is that heterogeneous clients often have diverse gains from and requirements for the trained model, while their contributions are hard to evaluate because training is privacy-preserving. Existing works mainly rely on a single-dimension metric to compute clients' contributions as aggregation weights, which may damage social fairness, discouraging the cooperation of worse-off clients and destabilizing revenue. To tackle this issue, we propose a novel incentive mechanism named FedLaw that effectively evaluates clients' contributions and assigns aggregation weights accordingly. Specifically, we reuse the local model updates and model the contribution evaluation process as a convex coalition game among multiple players with a non-empty core. By deriving a closed-form expression of the Shapley value, we solve for the game's core in quadratic time. Moreover, we theoretically prove that FedLaw guarantees individual fairness, coalition stability, computational efficiency, collective rationality, redundancy, symmetry, additivity, strict desirability, and individual monotonicity, and we show that FedLaw achieves a constant convergence bound. Extensive experiments on four real-world datasets validate the superiority of FedLaw over five state-of-the-art baselines in terms of model aggregation, fairness, and time overhead. The results show that FedLaw reduces the computation time of contribution evaluation by about 12 times and improves global model performance by about 2% while ensuring fairness.
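The abstract describes evaluating client contributions as Shapley values of a convex coalition game and turning those contributions into aggregation weights; the paper's closed-form, quadratic-time derivation is not reproduced here. Purely as an illustrative sketch, the Python code below computes exact Shapley values by brute-force coalition enumeration for a hypothetical coalition_utility function (e.g., validation accuracy of a model aggregated from a coalition's updates) and normalizes them into aggregation weights. All names in the sketch are assumptions, not FedLaw's actual implementation.

Illustrative sketch (Python):

    from itertools import combinations
    from math import factorial

    def shapley_values(clients, coalition_utility):
        """Exact Shapley values by enumerating all coalitions (exponential time).

        `coalition_utility` is a hypothetical stand-in for the game's value
        function: it maps a frozenset of client ids to a real-valued utility,
        e.g. the validation accuracy of a model aggregated from that
        coalition's local updates.
        """
        n = len(clients)
        phi = {c: 0.0 for c in clients}
        for c in clients:
            others = [x for x in clients if x != c]
            for k in range(n):
                # Shapley weight for coalitions of size k not containing c.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                for subset in combinations(others, k):
                    s = frozenset(subset)
                    marginal = coalition_utility(s | {c}) - coalition_utility(s)
                    phi[c] += weight * marginal
        return phi

    def aggregation_weights(phi):
        """Clip negative contributions and normalize to obtain aggregation weights."""
        clipped = {c: max(v, 0.0) for c, v in phi.items()}
        total = sum(clipped.values()) or 1.0
        return {c: v / total for c, v in clipped.items()}

    # Toy usage with a purely illustrative utility (coalition size).
    if __name__ == "__main__":
        clients = ["A", "B", "C"]
        phi = shapley_values(clients, lambda s: len(s))
        print(aggregation_weights(phi))  # {'A': 0.333..., 'B': 0.333..., 'C': 0.333...}

Note that this enumeration is exponential in the number of clients; the point of FedLaw's closed-form Shapley expression, per the abstract, is to avoid exactly this cost and solve the game core in quadratic time.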
Pages: 1049 - 1062
Page count: 14