FedLaw: Value-Aware Federated Learning With Individual Fairness and Coalition Stability

Cited by: 0
Authors
Lu, Jianfeng [1 ]
Zhang, Hangjian [2 ]
Zhou, Pan [3 ]
Wang, Xiong [4 ]
Wang, Chen [5 ]
Wu, Dapeng Oliver [6 ]
Affiliations
[1] Wuhan Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430065, Peoples R China
[2] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321004, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Cyber Sci & Engn, Wuhan 430074, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[5] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[6] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Training; Data models; Servers; Stability analysis; Optimization; Accuracy; Federated learning (FL); contribution evaluation; model aggregation; individual fairness; coalition stability; CORE;
DOI
10.1109/TETCI.2024.3446458
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
A long-standing problem in Federated Learning (FL) is that heterogeneous clients often derive diverse gains from, and have diverse requirements for, the trained model, while their contributions are hard to evaluate under privacy-preserving training. Existing works mainly rely on a single-dimensional metric to compute clients' contributions as aggregation weights, which can damage social fairness, discouraging the cooperation of worse-off clients and destabilizing their revenue. To tackle this issue, we propose a novel incentive mechanism named FedLaw that effectively evaluates clients' contributions and assigns aggregation weights accordingly. Specifically, we reuse the local model updates and model the contribution evaluation process as a convex coalition game among multiple players with a non-empty core. By deriving a closed-form expression of the Shapley value, we solve for the game's core in quadratic time. Moreover, we theoretically prove that FedLaw guarantees individual fairness, coalition stability, computational efficiency, collective rationality, redundancy, symmetry, additivity, strict desirability, and individual monotonicity, and show that FedLaw achieves a constant convergence bound. Extensive experiments on four real-world datasets validate the superiority of FedLaw over five state-of-the-art baselines in terms of model aggregation, fairness, and time overhead. Experimental results show that FedLaw reduces the computation time of contribution evaluation by about 12x and improves global model performance by about 2% while ensuring fairness.
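For context on the abstract's game-theoretic setup, the sketch below computes exact Shapley values for a toy coalition game and normalizes them into aggregation weights. This is a generic textbook illustration, not the paper's FedLaw algorithm: the brute-force enumeration here is exponential in the number of clients, whereas FedLaw's closed-form derivation (not reproduced here) reaches quadratic time. The characteristic function and its numbers are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley value of each player under characteristic function v.

    v maps a frozenset of players to that coalition's worth. Enumerates
    all coalitions, so this is exponential in len(players).
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                S = frozenset(subset)
                # Probability weight of coalition S forming before player i joins.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(S | {i}) - v(S))  # marginal contribution of i
        phi[i] = total
    return phi

# Hypothetical worth: validation-accuracy gain when a coalition of two
# clients trains together (illustrative numbers only).
worth = {frozenset(): 0.0, frozenset("A"): 0.6,
         frozenset("B"): 0.5, frozenset("AB"): 0.9}
phi = shapley_values(["A", "B"], lambda S: worth[frozenset(S)])
# Normalize Shapley values into model-aggregation weights.
weights = {k: val / sum(phi.values()) for k, val in phi.items()}
```

By efficiency of the Shapley value, the values sum to the grand coalition's worth (here 0.5 + 0.4 = 0.9), so normalizing them yields valid aggregation weights.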
Pages: 1049-1062
Page count: 14
Related Papers
27 items total
  • [11] Price of Stability in Quality-Aware Federated Learning
    Yan, Yizhou
    Tang, Xinyu
    Huang, Chao
    Tang, Ming
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 734 - 739
  • [12] Fairness-aware loss history based federated learning heuristic algorithm
    Mollanejad, Amir
    Navin, Ahmad Habibizad
    Ghanbari, Shamsollah
    KNOWLEDGE-BASED SYSTEMS, 2024, 288
  • [13] Balanced Federated Semisupervised Learning With Fairness-Aware Pseudo-Labeling
    Wei, Xiao-Xiang
    Huang, Hua
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (07) : 9395 - 9407
  • [14] Reputation-Aware Hedonic Coalition Formation for Efficient Serverless Hierarchical Federated Learning
    Ng, Jer Shyuan
    Lim, Wei Yang Bryan
    Xiong, Zehui
    Cao, Xianbin
    Jin, Jiangming
    Niyato, Dusit
    Leung, Cyril
    Miao, Chunyan
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (11) : 2675 - 2686
  • [15] Multi-objective federated learning: Balancing global performance and individual fairness
    Shen, Yuhao
    Xi, Wei
    Cai, Yunyun
    Fan, Yuwei
    Yang, He
    Zhao, Jizhong
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2025, 162
  • [16] Value of Information and Timing-aware Scheduling for Federated Learning
    Khan, Muhammad Azeem
    Yang, Howard H.
    Chen, Zihan
    Iera, Antonio
    Pappas, Nikolaos
    2023 IEEE CONFERENCE ON STANDARDS FOR COMMUNICATIONS AND NETWORKING, CSCN, 2023, : 94 - 99
  • [17] SFFL: Self-aware fairness federated learning framework for heterogeneous data distributions
    Zhang, Jiale
    Li, Ye
    Wu, Di
    Zhao, Yanchao
    Palaiahnakote, Shivakumara
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 269
  • [18] FedAVE: Adaptive data value evaluation framework for collaborative fairness in federated learning
    Wang, Zihui
    Peng, Zhaopeng
    Fan, Xiaoliang
    Wang, Zheng
    Wu, Shangbin
    Yu, Rongshan
    Yang, Peizhen
    Zheng, Chuanpan
    Wang, Cheng
    NEUROCOMPUTING, 2024, 574
  • [19] Fairness-Aware Federated Learning With Unreliable Links in Resource-Constrained Internet of Things
    Li, Zhidu
    Zhou, Yujie
    Wu, Dapeng
    Tang, Tong
    Wang, Ruyan
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (18) : 17359 - 17371
  • [20] Value-aware meta-transfer learning and convolutional mask attention networks for reservoir identification with limited data
    Chen, Bingyang
    Zeng, Xingjie
    Zhou, Jiehan
    Zhang, Weishan
    Cao, Shaohua
    Zhang, Baoyu
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 223