Cryptanalysis and Improvement of DeepPAR: Privacy-Preserving and Asynchronous Deep Learning for Industrial IoT

Cited by: 4
Authors
Chen, Yange [1 ,2 ]
He, Suyu [3 ]
Wang, Baocang [4 ,5 ]
Duan, Pu [6 ]
Zhang, Benyu [6 ]
Hong, Zhiyong [7 ,8 ]
Ping, Yuan [2 ]
Affiliations
[1] Xidian Univ, Sch Telecommun Engn, Xian 710071, Peoples R China
[2] Xuchang Univ, Sch Informat Engn, Xuchang 461000, Peoples R China
[3] Shanghai Jiyin Network Technol Co Ltd, Backend Engn Res & Dev Dept, Shanghai 200000, Peoples R China
[4] Xidian Univ, Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[5] Xidian Univ, Cryptog Res Ctr, Xian 710071, Peoples R China
[6] Ant Grp, Secure Collaborat Intelligence Lab, Hangzhou 310000, Peoples R China
[7] Wuyi Univ, Fac Intelligence Manufacture, Jiangmen 529020, Peoples R China
[8] Wuyi Univ, Yue Gang Ao Ind Big Data Collaborat Innovat Ctr, Jiangmen 529020, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2022, Vol. 9, Issue 21
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Servers; Training; Privacy; Industrial Internet of Things; Production; Homomorphic encryption; Asynchronous deep learning; homomorphic encryption; privacy preserving; proxy re-encryption; ENCRYPTION; PROTOCOLS;
DOI
10.1109/JIOT.2022.3181665
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With the rapid development of big data, the Industrial Internet of Things (IIoT) is gradually changing how traditional industries operate. Deep learning can extract useful knowledge from the large volumes of data generated in the IIoT and thereby help improve production and service quality. However, the lack of large-scale data sets leads to poor performance and overfitting of learning models, which has motivated federated deep learning over distributed data sets. Research has nevertheless shown that federated learning can still leak participants' private data. In IIoT applications such as smart power grids and smart medical care, leaking participants' private data can directly endanger national security and people's lives. Several privacy-preserving federated learning schemes have been proposed to protect participants' data, but security issues prevent them from being fully applied. In this article, we analyze the security of the DeepPAR scheme proposed by Zhang et al. and show that its re-encryption key generation process is insecure, allowing the secret key of a participant or of the proxy server to be leaked. In addition, the scheme does not resist collusion attacks between the parameter server and participants. Based on this analysis, we propose an improved scheme. A security proof shows that the improved scheme fixes the security problems of the original scheme and resists collusion attacks. Finally, performance analysis demonstrates the security and accuracy of the improved scheme.
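The re-encryption key weakness summarized in the abstract follows a pattern that is easy to demonstrate in a generic proxy re-encryption setting. The sketch below is a minimal, hypothetical illustration assuming a BBS98-style re-encryption key rk_{A->B} = sk_B * sk_A^(-1) mod q; it is not DeepPAR's actual construction nor the specific attack analyzed in this article, but it shows why a proxy that colludes with one key holder can recover the other party's secret key.

# Hedged, generic illustration of the collusion risk: in a BBS98-style proxy
# re-encryption scheme, the re-encryption key from A to B is
# rk = sk_B * sk_A^(-1) mod q, so a proxy holding rk that colludes with B
# (who knows sk_B) recovers A's secret key. Toy sketch only; not the actual
# DeepPAR key generation or the attack given in the paper.
import secrets

q = 2**127 - 1                         # toy prime group order (Mersenne prime M127)
sk_A = secrets.randbelow(q - 1) + 1    # delegator A's secret key
sk_B = secrets.randbelow(q - 1) + 1    # delegatee B's secret key

rk_A_to_B = sk_B * pow(sk_A, -1, q) % q   # re-encryption key handed to the proxy

# Collusion of the proxy (holding rk_A_to_B) and B (holding sk_B):
# one modular inversion leaks sk_A.
recovered_sk_A = sk_B * pow(rk_A_to_B, -1, q) % q
assert recovered_sk_A == sk_A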
Pages: 21958-21970
Number of pages: 13
Related Papers
50 records in total
  • [31] Privacy-Preserving Machine Learning Training in IoT Aggregation Scenarios
    Zhu, Liehuang
    Tang, Xiangyun
    Shen, Meng
    Gao, Feng
    Zhang, Jie
    Du, Xiaojiang
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (15) : 12106 - 12118
  • [32] Privacy-Preserving Asynchronous Vertical Federated Learning Algorithms for Multiparty Collaborative Learning
    Gu, Bin
    Xu, An
    Huo, Zhouyuan
    Deng, Cheng
    Huang, Heng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (11) : 6103 - 6115
  • [33] Privacy-Preserving Classification on Deep Learning with Exponential Mechanism
    Ju, Quan
    Xia, Rongqing
    Li, Shuhong
    Zhang, Xiaojian
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [34] Towards Efficient and Privacy-preserving Federated Deep Learning
    Hao, Meng
    Li, Hongwei
    Xu, Guowen
    Liu, Sen
    Yang, Haomiao
    ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [35] A comprehensive survey and taxonomy on privacy-preserving deep learning
    Tran, Anh-Tu
    Luong, The-Dung
    Huynh, Van-Nam
    NEUROCOMPUTING, 2024, 576
  • [36] Privacy-Preserving Deep Learning With Homomorphic Encryption: An Introduction
    Falcetta, Alessandro
    Roveri, Manuel
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2022, 17 (03) : 14 - 25
  • [37] Privacy-preserving Deep Learning based Record Linkage
Ranbaduge, T.
Vatsalan, D.
Ding, M.
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 1 - 12
  • [38] Privacy-Preserving Deep Learning via Weight Transmission
    Le Trieu Phong
    Tran Thi Phuong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2019, 14 (11) : 3003 - 3015
  • [39] Privacy-Preserving Federated Deep Learning With Irregular Users
    Xu, Guowen
    Li, Hongwei
    Zhang, Yun
    Xu, Shengmin
    Ning, Jianting
    Deng, Robert H.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (02) : 1364 - 1381
  • [40] Privacy-Preserving Collaborative Deep Learning With Unreliable Participants
    Zhao, Lingchen
    Wang, Qian
    Zou, Qin
    Zhang, Yan
    Chen, Yanjiao
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2020, 15 : 1486 - 1500