Boosting Communication Efficiency of Federated Learning's Secure Aggregation

Cited by: 0
Authors
Nazemi, Niousha [1]
Tavallaie, Omid [1]
Chen, Shuaijun [1]
Zomaya, Albert Y. [1]
Holz, Ralph [1,2]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Sydney, NSW, Australia
[2] Univ Munster, Fac Math & Comp Sci, Munster, Germany
Keywords
Federated Learning (FL); Secure Aggregation (SecAgg); Communication Efficiency
DOI
10.1109/DSN-S60304.2024.00045
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL) is a decentralized machine learning approach in which client devices train models locally and send them to a server that aggregates them into a global model. FL is vulnerable to model inversion attacks, in which the server infers sensitive client data from the trained models. Google's Secure Aggregation (SecAgg) protocol addresses this privacy issue by masking each client's trained model with shared secrets and individual elements generated locally on the client's device. Although SecAgg effectively preserves privacy, it imposes considerable communication and computation overhead, especially as the network size increases. Building upon SecAgg, this poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces this overhead by using only two shared secrets per client to mask the model. The method is designed for stable networks with low delay variation and limited client dropouts. CESA is independent of the data distribution and of the network size (for networks with more than 6 nodes), and it prevents the honest-but-curious server from accessing unmasked models. Our initial evaluation shows that CESA significantly reduces communication cost compared to SecAgg.
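To make the masking idea concrete, the following is a minimal Python sketch of pairwise-mask cancellation, the mechanism underlying SecAgg-style secure aggregation: each pair of clients derives a shared secret, expands it with a PRG, and one side adds the resulting mask while the other subtracts it, so the masks cancel when the server sums the masked models. The network size, model dimension, and the use of NumPy's default_rng as a stand-in for key agreement and a cryptographic PRG are illustrative assumptions; this is not the paper's CESA construction.

# Minimal sketch (not the paper's implementation) of pairwise-mask
# cancellation, the core idea behind SecAgg-style masking.
# Assumptions for illustration: NUM_CLIENTS, MODEL_DIM, and numpy's
# default_rng standing in for key agreement and a cryptographic PRG.
import numpy as np

NUM_CLIENTS = 4   # illustrative network size, not from the paper
MODEL_DIM = 5     # illustrative model size

rng = np.random.default_rng(0)
models = [rng.normal(size=MODEL_DIM) for _ in range(NUM_CLIENTS)]

# Each unordered client pair (i, j) shares a secret; here a random seed
# stands in for the key agreed between the two endpoints.
shared_seed = {(i, j): int(rng.integers(1 << 31))
               for i in range(NUM_CLIENTS)
               for j in range(i + 1, NUM_CLIENTS)}

def pairwise_mask(i):
    # Client i adds PRG(s_ij) for every j > i and subtracts it for every
    # j < i, so the masks cancel when the server sums all masked models.
    total = np.zeros(MODEL_DIM)
    for j in range(NUM_CLIENTS):
        if j == i:
            continue
        seed = shared_seed[(min(i, j), max(i, j))]
        sign = 1.0 if i < j else -1.0
        total += sign * np.random.default_rng(seed).normal(size=MODEL_DIM)
    return total

# Clients upload only masked models; the server never sees a raw model.
masked = [models[i] + pairwise_mask(i) for i in range(NUM_CLIENTS)]

# At the server, the masks cancel and the true sum of models is recovered.
aggregate = np.sum(masked, axis=0)
assert np.allclose(aggregate, np.sum(models, axis=0))
print("aggregated model:", aggregate)

In this sketch each client holds one secret per peer (N - 1 in total), as in SecAgg; the communication saving claimed for CESA comes from cutting this to two shared secrets per client, whose exact construction is not reproduced here.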
Pages: 157 - 158
Number of pages: 2
Related Papers
50 items in total
  • [41] CodedPaddedFL and CodedSecAgg: Straggler Mitigation and Secure Aggregation in Federated Learning
    Schlegel, Reent
    Kumar, Siddhartha
    Rosnes, Eirik
    Graell i Amat, Alexandre
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2023, 71 (04) : 2013 - 2027
  • [42] Accountable and Verifiable Secure Aggregation for Federated Learning in IoT Networks
    Yang, Xiaoyi
    Zhao, Yanqi
    Chen, Dian
    Yu, Yong
    Du, Xiaojiang
    Guizani, Mohsen
    IEEE NETWORK, 2022, 36 (05): 173 - 179
  • [43] Secure Aggregation is Insecure: Category Inference Attack on Federated Learning
    Gao, Jiqiang
    Hou, Boyu
    Guo, Xiaojie
    Liu, Zheli
    Zhang, Ying
    Chen, Kai
    Li, Jin
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (01) : 147 - 160
  • [44] Secure and efficient multi-key aggregation for federated learning
    Li, Yanling
    Lai, Junzuo
    Zhang, Rong
    Sun, Meng
    INFORMATION SCIENCES, 2024, 654
  • [45] A Flexible and Scalable Malicious Secure Aggregation Protocol for Federated Learning
    Tang, Jinling
    Xu, Haixia
    Wang, Mingsheng
    Tang, Tao
    Peng, Chunying
    Liao, Huimei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 4174 - 4187
  • [46] Evaluating the Communication Efficiency in Federated Learning Algorithms
    Asad, Muhammad
    Moustafa, Ahmed
    Ito, Takayuki
    Aslam, Muhammad
    PROCEEDINGS OF THE 2021 IEEE 24TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN (CSCWD), 2021: 552 - 557
  • [47] Communication and computation efficiency in Federated Learning: A survey
    Almanifi, Omair Rashed Abdulwareth
    Chow, Chee-Onn
    Tham, Mau-Luen
    Chuah, Joon Huang
    Kanesan, Jeevan
    INTERNET OF THINGS, 2023, 22
  • [48] Learning from Failures: Secure and Fault-Tolerant Aggregation for Federated Learning
    Mansouri, Mohamad
    Onen, Melek
    Ben Jaballah, Wafa
    PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022, 2022: 146 - 158
  • [49] Privacy-Preserving Machine Learning Using Federated Learning and Secure Aggregation
    Lia, Dragos
    Togan, Mihai
    PROCEEDINGS OF THE 2020 12TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE (ECAI-2020), 2020
  • [50] Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning
    Bouacida, Nader
    Hou, Jiahui
    Zang, Hui
    Liu, Xin
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM WKSHPS 2021), 2021