Learning Fast and Slow: Towards Inclusive Federated Learning

Cited by: 0
Authors:
Munir, Muhammad Tahir [1 ]
Saeed, Muhammad Mustansar [1 ]
Ali, Mahad [1 ]
Qazi, Zafar Ayyub [1 ]
Raza, Agha Ali [1 ]
Qazi, Ihsan Ayyub [1 ]
Affiliations:
[1] Lahore Univ Management Sci, Dept Comp Sci, Lahore, Pakistan
Keywords:
Federated Learning; Fairness; Robustness; Developing Countries
DOI:
10.1007/978-3-031-43415-0_23
CLC Number:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract
Today's deep learning systems rely on large amounts of useful data to make accurate predictions. Such data is often private and thus not readily available due to rising privacy concerns. Federated learning (FL) tackles this problem by training a shared model locally on client devices, enabling learning in a privacy-preserving manner. Unfortunately, FL's effectiveness degrades when model training involves clients with heterogeneous devices, a common case especially in developing countries. FL drops slow clients, which not only limits learning but also systematically excludes them, potentially biasing results. We propose Hasaas, a system that tackles this challenge by adapting the model size for slow clients based on their hardware resources. By doing so, Hasaas obviates the need to drop slow clients, which improves model accuracy and fairness. To improve robustness in the presence of statistical heterogeneity, Hasaas uses insights from the Central Limit Theorem to estimate model parameters in every round. Experimental evaluation involving large-scale simulations and a small-scale real testbed shows that Hasaas provides robust performance in terms of test accuracy, fairness, and convergence time compared to state-of-the-art schemes.
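The abstract describes Hasaas only at a high level: size the local model to each client's hardware so slow clients are not dropped, then aggregate the heterogeneous updates at the server. The record does not include the actual algorithm, so the sketch below is a purely hypothetical illustration of that first idea, in the spirit of width-scaled heterogeneous FL: every helper name and capacity threshold here is invented, not taken from the paper.

```python
import numpy as np

def width_fraction(flops_gflops):
    """Map a client's (hypothetical) compute budget to a model-width
    fraction. Thresholds are illustrative, not from the paper."""
    if flops_gflops >= 10:
        return 1.0
    if flops_gflops >= 2:
        return 0.5
    return 0.25

def slice_model(full_weights, frac):
    """Give a slow client only the leading `frac` of each layer's units."""
    return [w[: max(1, int(len(w) * frac))] for w in full_weights]

def aggregate(full_size, client_updates):
    """Average coordinate-wise; a slow client contributes only to the
    prefix of the parameter vector it actually trained."""
    total = np.zeros(full_size)
    counts = np.zeros(full_size)
    for upd in client_updates:
        total[: len(upd)] += upd
        counts[: len(upd)] += 1
    counts[counts == 0] = 1  # coordinates no client trained stay at 0
    return total / counts

# One toy round: a full layer of 8 weights, three clients of mixed capacity.
full = [np.ones(8)]
fracs = [width_fraction(f) for f in (12.0, 3.0, 0.5)]
updates = [slice_model(full, f)[0] * (i + 1) for i, f in enumerate(fracs)]
merged = aggregate(8, updates)
print(fracs, merged)
```

Under this toy scheme no client is excluded: the weakest device still trains (and influences) the first quarter of the layer, while only the fastest device shapes the tail. How Hasaas actually picks sub-models and applies its CLT-based parameter estimation is described in the paper itself, not in this record.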
Pages: 384-401 (18 pages)
Related Papers (50 total)
  • [1] Learning, fast and slow
    Meister, Markus
    CURRENT OPINION IN NEUROBIOLOGY, 2022, 75
  • [2] Learning, Fast or Slow
    Barber, Brad M.
    Lee, Yi-Tsung
    Liu, Yu-Jane
    Odean, Terrance
    Zhang, Ke
    REVIEW OF ASSET PRICING STUDIES, 2020, 10 (01): : 61 - 93
  • [3] Towards Fair Federated Learning
    Zhou, Zirui
    Chu, Lingyang
    Liu, Changxin
    Wang, Lanjun
    Pei, Jian
    Zhang, Yong
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 4100 - 4101
  • [4] Towards Federated Learning by Kernels
    Shin, Kilho
    Seito, Takenobu
    Liu, Chris
    2024 10TH INTERNATIONAL CONFERENCE ON MECHATRONICS AND ROBOTICS ENGINEERING, ICMRE, 2024, : 317 - 323
  • [5] Federated Reinforcement Learning For Fast Personalization
    Nadiger, Chetan
    Kumar, Anil
    Abdelhak, Sherine
    2019 IEEE SECOND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE), 2019, : 123 - 127
  • [6] Fast-Convergent Federated Learning
    Nguyen, Hung T.
    Sehwag, Vikash
    Hosseinalipour, Seyyedali
    Brinton, Christopher G.
    Chiang, Mung
    Poor, H. Vincent
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (01) : 201 - 218
  • [7] Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor
    Chen, Jinqian
    Zhu, Jihua
    Zheng, Qinghai
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8697 - 8706
  • [8] Towards a Federated Fuzzy Learning System
    Wilbik, Anna
    Grefen, Paul
    IEEE CIS INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS 2021 (FUZZ-IEEE), 2021,
  • [9] Towards Federated Learning on the Quantum Internet
    Suenkel, Leo
    Koelle, Michael
    Rohe, Tobias
    Gabor, Thomas
    COMPUTATIONAL SCIENCE, ICCS 2024, PT VI, 2024, 14937 : 330 - 344
  • [10] Towards Efficient Decentralized Federated Learning
    Pappas, Christodoulos
    Papadopoulos, Dimitrios
    Chatzopoulos, Dimitris
    Panagou, Eleni
    Lalis, Spyros
    Vavalis, Manolis
    2022 IEEE 42ND INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS WORKSHOPS (ICDCSW), 2022, : 79 - 85