FLaPS: Federated Learning and Privately Scaling

Cited by: 3
|
Authors
Paul, Sudipta [1 ,2 ]
Sengupta, Poushali [3 ]
Mishra, Subhankar [1 ,2 ]
Affiliations
[1] Natl Inst Sci Educ & Res, Sch Comp Sci, Bhubaneswar, India
[2] Homi Bhabha Natl Inst, Mumbai 400094, Maharashtra, India
[3] Univ Kalyani, Dept Stat, Kalyani 741235, W Bengal, India
Keywords
Federated Learning (FL); Distributed Learning (DL); Differential Privacy (DP); Federated Averaging (FedAvg); k-means Clustering
DOI
10.1109/MASS50613.2020.00011
CLC Number
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Federated learning (FL) is a distributed learning process in which the model (weights and checkpoints) is transferred to the devices that possess the data, rather than the classical approach of transferring and aggregating the data centrally. In this way, sensitive data never leaves the user devices. FL uses the FedAvg algorithm, which trains by iterative model averaging over non-IID and unbalanced distributed data, without depending on the data quantity. FL has several issues: 1) poor scalability, as the model is iteratively trained over all the devices, a problem amplified by device drops; 2) the security and privacy trade-off of the learning process is still not robust enough; and 3) the overall communication efficiency and cost are high. To mitigate these challenges we present the Federated Learning and Privately Scaling (FLaPS) architecture, which improves the scalability as well as the security and privacy of the system. The devices are grouped into clusters, which further yields a better privacy-scaled turnaround time to finish a round of training. Therefore, even if a device drops in the middle of training, the whole process can be restarted after a definite amount of time. Both the data and the model are communicated using differentially private reports with iterative shuffling, which provides a better privacy-utility trade-off. We evaluated FLaPS on the MNIST, CIFAR-10, and Tiny-ImageNet-200 datasets using various CNN models. Experimental results show FLaPS to be an improved, time- and privacy-scaled environment with better or comparable after-learning parameters with respect to the central and FL models.
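The iterative model averaging at the core of FedAvg, which the abstract builds on, can be sketched as follows. This is a minimal NumPy illustration with made-up client weights and sample counts, not the authors' implementation; in FLaPS the same aggregation would operate on cluster-level updates rather than raw device updates.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client models weighted by local data size.

    client_weights: list of 1-D NumPy arrays, one model-weight vector per client.
    client_sizes:   number of local training samples per client; unbalanced
                    sizes model the non-IID, unbalanced setting in the paper.
    """
    total = sum(client_sizes)
    # Each client's contribution is proportional to its share of the data.
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients with unbalanced data holdings.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_model = fed_avg(weights, sizes)
print(global_model)  # -> [4. 5.]
```

In a full round, the server would broadcast `global_model` back to the clusters and repeat; a dropped device then only delays its own cluster's report rather than the entire round.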
Pages: 13 - 19
Page count: 7