PVD-FL: A Privacy-Preserving and Verifiable Decentralized Federated Learning Framework

Cited by: 33
Authors
Zhao, Jiaqi [1 ]
Zhu, Hui [1 ]
Wang, Fengwei [1 ]
Lu, Rongxing [2 ]
Liu, Zhe [3 ,4 ]
Li, Hui [1 ]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, Xian 710126, Shaanxi, Peoples R China
[2] Univ New Brunswick, Fac Comp Sci, Fredericton, NB E3B 5A3, Canada
[3] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Jiangsu, Peoples R China
[4] Zhejiang Lab, Hangzhou 311100, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Computational modeling; Training; Data models; Security; Data privacy; Privacy; Deep learning; Federated learning; privacy-preserving; verification; decentralized; efficiency; QUERY;
DOI
10.1109/TIFS.2022.3176191
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Over the past years, the increasingly severe data island problem has spawned an emerging distributed deep learning framework, federated learning, in which a global model can be constructed over multiple participants without directly sharing their raw data. Despite its promising prospects, federated learning still faces many security challenges, such as privacy preservation and integrity verification. Furthermore, federated learning is usually performed with the assistance of a central server, which is prone to cause trust concerns and communication bottlenecks. To tackle these challenges, in this paper, we propose a privacy-preserving and verifiable decentralized federated learning framework, named PVD-FL, which can achieve secure deep learning model training under a decentralized architecture. Specifically, we first design an efficient and verifiable cipher-based matrix multiplication (EVCM) algorithm to execute the most basic calculation in deep learning. Then, by employing EVCM, we design a suite of decentralized algorithms to construct the PVD-FL framework, which ensures the confidentiality of both the global model and local updates and the verification of every training step. Detailed security analysis shows that PVD-FL can well protect privacy against various inference attacks and guarantee training integrity. In addition, extensive experiments on real-world datasets demonstrate that PVD-FL achieves lossless accuracy and practical performance.
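Editor's note: the abstract does not specify how EVCM protects and verifies matrix products, so the snippet below is only a minimal, hypothetical sketch of the general idea of "verifiable computation on protected matrices", assuming additive masking of the weight matrix and a Freivalds-style probabilistic correctness check. It is not the paper's EVCM construction; all names (mask, freivalds_check, W, X, R) are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration only: masked matrix multiplication with a
# Freivalds-style probabilistic verification of the returned product.
# This is NOT the EVCM algorithm from the PVD-FL paper.

rng = np.random.default_rng(0)

def mask(M, R):
    """Additively mask a matrix M with a random matrix R of the same shape."""
    return M + R

def freivalds_check(A, B, C, trials=10):
    """Probabilistically verify C == A @ B without recomputing the full product."""
    n = B.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=(n, 1))   # random 0/1 challenge vector
        if not np.allclose(A @ (B @ r), C @ r):
            return False
    return True

# Owner holds weight matrix W; the peer supplies input X.
W = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 5))

R = rng.standard_normal(W.shape)              # one-time additive mask
W_masked = mask(W, R)                         # only the masked matrix leaves the owner

# Untrusted peer computes on the masked weights.
Y_masked = W_masked @ X

# Owner removes the mask's contribution and verifies the unmasked result.
Y = Y_masked - R @ X                          # (W + R) @ X - R @ X == W @ X
assert freivalds_check(W, X, Y)               # integrity check passes
print(Y.shape)                                # (4, 5)
```

Under these assumptions, the peer never sees W in the clear, and the owner detects an incorrectly computed product with high probability after a few random challenges; the actual PVD-FL protocol builds its decentralized training algorithms on a different, cipher-based construction described in the full paper.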
Pages: 2059-2073
Number of pages: 15
Related Papers
50 records in total
  • [1] A Verifiable and Privacy-Preserving Federated Learning Training Framework
    Duan, Haohua
    Peng, Zedong
    Xiang, Liyao
    Hu, Yuncong
    Li, Bo
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 5046 - 5058
  • [2] Privacy-Preserving and Verifiable Federated Learning Framework for Edge Computing
    Zhou, Hao
    Yang, Geng
    Huang, Yuxian
    Dai, Hua
    Xiang, Yang
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 565 - 580
  • [3] A Privacy-Preserving and Verifiable Federated Learning Scheme
    Zhang, Xianglong
    Fu, Anmin
    Wang, Huaqun
    Zhou, Chunyi
    Chen, Zhenzhu
    [J]. ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [4] A verifiable and privacy-preserving framework for federated recommendation system
    Gao F.
    Zhang H.
    Lin J.
    Xu H.
    Kong F.
    Yang G.
    [J]. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2023, 14 (04) : 4273 - 4287
  • [5] DeTrust-FL: Privacy-Preserving Federated Learning in Decentralized Trust Setting
    Xu, Runhua
    Baracaldo, Nathalie
    Zhou, Yi
    Anwar, Ali
    Kadhe, Swanand
    Ludwig, Heiko
    [J]. 2022 IEEE 15TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING (IEEE CLOUD 2022), 2022, : 417 - 426
  • [6] PVFL: Verifiable federated learning and prediction with privacy-preserving
    Yin, Benxin
    Zhang, Hanlin
    Lin, Jie
    Kong, Fanyu
    Yu, Leyun
    [J]. COMPUTERS & SECURITY, 2024, 139
  • [7] Privacy-Preserving and Reliable Decentralized Federated Learning
    Gao, Yuanyuan
    Zhang, Lei
    Wang, Lulu
    Choo, Kim-Kwang Raymond
    Zhang, Rui
    [J]. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (04) : 2879 - 2891
  • [8] SVeriFL: Successive verifiable federated learning with privacy-preserving
    Gao, Hang
    He, Ningxin
    Gao, Tiegang
    [J]. INFORMATION SCIENCES, 2023, 622 : 98 - 114
  • [9] Privacy-Preserving Decentralized Aggregation for Federated Learning
    Jeon, Beomyeol
    Ferdous, S. M.
    Rahman, Muntasir Raihan
    Walid, Anwar
    [J]. IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM WKSHPS 2021), 2021,
  • [10] GAIN: Decentralized Privacy-Preserving Federated Learning
    Jiang, Changsong
    Xu, Chunxiang
    Cao, Chenchen
    Chen, Kefei
    [J]. JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2023, 78