Research and Challenge of Distributed Deep Learning Privacy and Security Attack

Cited by: 0
Authors
Zhou C. [1, 2]
Chen D. [1]
Wang S. [1]
Fu A. [1, 2]
Gao Y. [1]
Affiliations
[1] School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing
[2] Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, Guilin
Funding
National Natural Science Foundation of China
Keywords
Backdoor attack; Deep learning; Distributed deep learning; Privacy attack; Privacy protection;
DOI
10.7544/issn1000-1239.2021.20200966
Abstract
Unlike the centralized deep learning paradigm, distributed deep learning removes the requirement that data be gathered in one place during model training: data are processed locally, and participants collaborate without exchanging raw data. This significantly reduces the risk of user privacy leakage, breaks down data silos at the technical level, and improves the efficiency of deep learning. Distributed deep learning can be widely applied in smart healthcare, smart finance, smart retail, and smart transportation. However, typical attacks such as generative adversarial network (GAN) attacks, membership inference attacks, and backdoor attacks have shown that distributed deep learning still suffers from serious privacy vulnerabilities and security threats. This paper first compares and analyzes the characteristics and core problems of three distributed deep learning modes: collaborative learning, federated learning, and split learning. Second, from the perspective of privacy attacks, it comprehensively reviews the various types of privacy attacks faced by distributed deep learning and summarizes existing defenses against them. From the perspective of security attacks, it then analyzes the attack process and inherent security threats of three security attacks, namely data poisoning attacks, adversarial example attacks, and backdoor attacks, and examines existing defense techniques in terms of defense principles, adversary capabilities, and defense effects. Finally, future research directions for distributed deep learning are discussed and prospected from the perspective of privacy and security attacks. © 2021, Science Press. All rights reserved.
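As a minimal illustration of the data-stays-local principle described in the abstract, the sketch below aggregates locally trained client models by weighted averaging, in the style of federated averaging (FedAvg). The linear model, synthetic client datasets, and hyperparameters are hypothetical choices for this example and are not taken from the surveyed paper.

```python
# Hypothetical federated-averaging sketch: clients train locally and share only
# model parameters; the server never sees the raw data. Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training; only the updated weights leave the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Synthetic private datasets held by three clients (never pooled on a server).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):
    # Each client computes an update on its own data.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    # The server aggregates the updates, weighted by local dataset size.
    sizes = [len(y) for _, y in clients]
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", w_global)  # converges toward true_w = [2, -1]
```

The privacy attacks surveyed in the paper, such as membership inference and GAN-based attacks, target exactly these exchanged parameters or gradients rather than the raw data, which is why keeping data local does not by itself guarantee privacy.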
Pages: 927-943
Page count: 16