Boosting Communication Efficiency of Federated Learning's Secure Aggregation

Cited by: 0
Authors
Nazemi, Niousha [1 ]
Tavallaie, Omid [1 ]
Chen, Shuaijun [1 ]
Zomaya, Albert Y. [1 ]
Holz, Ralph [1 ,2 ]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Sydney, NSW, Australia
[2] Univ Munster, Fac Math & Comp Sci, Munster, Germany
Keywords
Federated Learning (FL); Secure Aggregation (SecAgg); Communication Efficiency;
DOI
10.1109/DSN-S60304.2024.00045
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Federated Learning (FL) is a decentralized machine learning approach in which client devices train models locally and send them to a server, which aggregates them into a global model. FL is vulnerable to model inversion attacks, in which the server infers sensitive client data from the trained models. Google's Secure Aggregation (SecAgg) protocol addresses this privacy issue by masking each client's trained model using shared secrets and individual elements generated locally on the client's device. Although SecAgg effectively preserves privacy, it imposes considerable communication and computation overhead, especially as the network grows. Building on SecAgg, this poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces this overhead by using only two shared secrets per client to mask the model. We propose our method for stable networks with low delay variation and limited client dropouts. CESA is independent of the data distribution and network size (for networks with more than 6 nodes) and prevents the honest-but-curious server from accessing unmasked models. Our initial evaluation shows that CESA significantly reduces communication cost compared to SecAgg.
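The core mechanism the abstract describes, masking each model with shared secrets that cancel when the server sums the uploads, can be illustrated with a small sketch. The abstract does not specify how CESA pairs clients, so the ring arrangement below (each client shares one seed with each of two neighbours, giving exactly two shared secrets per client) is an illustrative assumption, not the paper's actual construction; the pad derivation via Python's `random.Random` stands in for a cryptographic PRG.

```python
import random

MOD = 2**32  # work in a finite group so additive masks cancel exactly

def pad(seed, dim):
    """Deterministic pseudorandom pad expanded from a shared seed (PRG stand-in)."""
    rng = random.Random(seed)
    return [rng.randrange(MOD) for _ in range(dim)]

def mask(model, add_seed, sub_seed, dim):
    """Mask a model update with one additive and one subtractive pad."""
    p_add, p_sub = pad(add_seed, dim), pad(sub_seed, dim)
    return [(x + a - s) % MOD for x, a, s in zip(model, p_add, p_sub)]

# Demo: 6 clients in a ring; seed[i] is shared by client i and client (i+1) % n,
# so every client holds exactly two shared secrets (one per neighbour).
n, dim = 6, 4
models = [[random.randrange(1000) for _ in range(dim)] for _ in range(n)]
ring_seed = [random.randrange(MOD) for _ in range(n)]

# Client i adds the pad of its "next" seed and subtracts the pad of its "previous" seed.
masked = [mask(models[i], ring_seed[i], ring_seed[(i - 1) % n], dim)
          for i in range(n)]

# Server-side aggregation: every pad is added once and subtracted once,
# so the masks cancel and only the true sum of models remains.
agg = [sum(col) % MOD for col in zip(*masked)]
true_sum = [sum(col) % MOD for col in zip(*models)]
assert agg == true_sum
```

The server only ever sees the masked vectors; any single upload looks uniformly random, yet the aggregate is exact, which is the property both SecAgg and CESA rely on.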
Pages: 157-158 (2 pages)