Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning

Cited by: 9
Authors
Wu, Yusen [1 ]
Chen, Hao [1 ]
Wang, Xin [1 ]
Liu, Chao [1 ]
Nguyen, Phuong [1 ,2 ]
Yesha, Yelena [1 ,3 ]
Affiliations
[1] Univ Maryland, Baltimore, MD 21201 USA
[2] OpenKneck Inc, Halethorpe, MD USA
[3] Univ Miami, Coral Gables, FL 33124 USA
Keywords
Data security; Byzantine-resilient SGD; Distributed ML
DOI
10.1109/BigData52589.2021.9671583
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence (AI) and machine learning (ML) models in large-scale distributed machine learning systems, creating security risks for their prediction outcomes. For example, attackers may poison a model by presenting inaccurate or misrepresentative data, or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network failures, occur in distributed systems and likewise degrade prediction outcomes. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were neither under attack nor experiencing failures, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
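ParSGD's exact aggregation rule is defined in the full text of the paper; the Python sketch below is only a minimal illustration of the general idea behind trimmed-mean-style Byzantine-resilient gradient aggregation, on which algorithms of this kind are commonly built. The function name robust_aggregate and the trim_ratio parameter are hypothetical choices for this example, not taken from the paper.

import numpy as np

def robust_aggregate(gradients, trim_ratio=0.25):
    # Stack per-worker gradients into an (n_workers, n_params) matrix.
    G = np.stack(gradients)
    n = G.shape[0]
    k = int(n * trim_ratio)          # number of values trimmed from each tail
    G_sorted = np.sort(G, axis=0)    # sort each coordinate independently
    trimmed = G_sorted[k:n - k]      # discard the k largest and k smallest per coordinate
    return trimmed.mean(axis=0)      # coordinate-wise trimmed mean

# Toy run: 7 honest workers plus 3 compromised ones sending inflated gradients.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(7)]
poisoned = [np.full(4, 100.0) for _ in range(3)]
print(robust_aggregate(honest + poisoned, trim_ratio=0.3))  # stays near 1.0

With 10 workers and trim_ratio=0.3, the three poisoned gradients fall entirely in the trimmed tail, so the aggregate stays close to the honest mean; this is the intuition behind tolerating nearly half of the nodes being compromised or failed, as the abstract describes.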
Pages: 3380-3389 (10 pages)
Related Papers (50 total)
  • [41] Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
    Rosenberg, Ishai
    Shabtai, Asaf
    Elovici, Yuval
    Rokach, Lior
    ACM COMPUTING SURVEYS, 2021, 54 (05)
  • [42] Stealing Machine Learning Models: Attacks and Countermeasures for Generative Adversarial Networks
    Hu, Hailong
    Pang, Jun
    37TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2021, 2021, : 1 - 16
  • [43] Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware
    Demetrio, Luca
    Biggio, Battista
    Roli, Fabio
    IEEE SECURITY & PRIVACY, 2022, 20 (05) : 77 - 85
  • [44] Adversarial attacks on machine learning cybersecurity defences in Industrial Control Systems
    Anthi, Eirini
    Williams, Lowri
    Rhode, Matilda
    Burnap, Pete
    Wedgbury, Adam
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2021, 58
  • [45] A Network Security Classifier Defense: Against Adversarial Machine Learning Attacks
    De Lucia, Michael J.
    Cotton, Chase
    PROCEEDINGS OF THE 2ND ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING, WISEML 2020, 2020, : 67 - 73
  • [46] A Systematic Review of Adversarial Machine Learning Attacks, Defensive Controls, and Technologies
    Malik, Jasmita
    Muthalagu, Raja
    Pawar, Pranav M.
    IEEE ACCESS, 2024, 12 : 99382 - 99421
  • [47] Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems
    Newaz, A. K. M. Iqtidar
    Haque, Nur Imtiazul
    Sikder, Amit Kumar
    Rahman, Mohammad Ashiqur
    Uluagac, A. Selcuk
    2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,
  • [48] Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems
    Haroon, Muhammad Shahzad
    Ali, Husnain Mansoor
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73 (02): : 3513 - 3527
  • [49] Detection and Mitigation of Byzantine Attacks in Distributed Training
    Konstantinidis, Konstantinos
    Vaswani, Namrata
    Ramamoorthy, Aditya
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (02) : 1493 - 1508
  • [50] Secure and Resilient Distributed Machine Learning Under Adversarial Environments
    Zhang, Rui
    Zhu, Quanyan
    IEEE AEROSPACE AND ELECTRONIC SYSTEMS MAGAZINE, 2016, 31 (03) : 34 - 36