Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning

Cited by: 9
Authors
Wu, Yusen [1 ]
Chen, Hao [1 ]
Wang, Xin [1 ]
Liu, Chao [1 ]
Nguyen, Phuong [1 ,2 ]
Yesha, Yelena [1 ,3 ]
Affiliations
[1] Univ Maryland, Baltimore, MD 21201 USA
[2] OpenKneck Inc, Halethorpe, MD USA
[3] Univ Miami, Coral Gables, FL 33124 USA
Keywords
Data security; Byzantine-resilient SGD; Distributed ML
DOI
10.1109/BigData52589.2021.9671583
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence (AI) and machine learning (ML) models in large-scale distributed machine learning systems, posing security risks to their prediction outcomes. For example, attackers may poison a model by presenting inaccurate or misrepresentative data, or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and likewise degrade prediction outcomes. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were neither attacked nor failing, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
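The abstract describes ParSGD only at a high level; the concrete aggregation rule appears in the paper itself. As a hedged illustration of the general idea behind Byzantine-robust gradient aggregation of this kind, the Python sketch below uses a coordinate-wise trimmed mean, one standard way to tolerate up to f < n/2 corrupted workers. The function name trimmed_mean_aggregate and the choice of trimmed mean are assumptions made for illustration, not the paper's actual ParSGD rule.

```python
import numpy as np

def trimmed_mean_aggregate(gradients, num_byzantine):
    """Coordinate-wise trimmed-mean aggregation (illustrative sketch).

    A generic Byzantine-robust aggregation rule, NOT necessarily the
    ParSGD rule from the paper. `gradients` is a list of 1-D numpy
    arrays, one per worker; `num_byzantine` is an assumed upper bound
    f on compromised workers (the abstract tolerates almost half).
    """
    stacked = np.stack(gradients)            # shape: (n_workers, dim)
    sorted_coords = np.sort(stacked, axis=0) # sort each coordinate
    f = num_byzantine
    # Drop the f smallest and f largest values in every coordinate,
    # then average what remains; outliers cannot shift the result far.
    trimmed = sorted_coords[f:len(gradients) - f]
    return trimmed.mean(axis=0)

# Example: 7 workers, 3 of them sending poisoned gradients.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(4)]
poisoned = [np.full(4, 100.0) for _ in range(3)]
agg = trimmed_mean_aggregate(honest + poisoned, num_byzantine=3)
print(agg)  # stays close to the honest mean (~1.0 per coordinate)
```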
Pages: 3380 - 3389
Number of pages: 10
Related Papers
50 records in total
  • [2] SLC: A Permissioned Blockchain for Secure Distributed Machine Learning against Byzantine Attacks
    Liang, Lun
    Cao, Xianghui
    Zhang, Jun
    Sun, Changyin
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 7073 - 7078
  • [3] Adversarial attacks on medical machine learning
    Finlayson, Samuel G.
    Bowers, John D.
    Ito, Joichi
    Zittrain, Jonathan L.
    Beam, Andrew L.
    Kohane, Isaac S.
    SCIENCE, 2019, 363 (6433) : 1287 - 1289
  • [4] Enablers of Adversarial Attacks in Machine Learning
    Izmailov, Rauf
    Sugrim, Shridatt
    Chadha, Ritu
    McDaniel, Patrick
    Swami, Ananthram
    2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 425 - 430
  • [5] Genuinely distributed Byzantine machine learning
    El-Mhamdi, El-Mahdi
    Guerraoui, Rachid
    Guirguis, Arsany
    Hoang, Le-Nguyen
    Rouault, Sebastien
    DISTRIBUTED COMPUTING, 2022, 35 (04) : 305 - 331
  • [7] Detection of adversarial attacks on machine learning systems
    Judah, Matthew
    Sierchio, Jen
    Planer, Michael
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [8] Safe Machine Learning and Defeating Adversarial Attacks
    Rouhani, Bita Darvish
    Samragh, Mohammad
    Javidi, Tara
    Koushanfar, Farinaz
    IEEE SECURITY & PRIVACY, 2019, 17 (02) : 31 - 38
  • [9] Reliable IoT Paradigm With Ensemble Machine Learning for Faults Diagnosis of Power Transformers Considering Adversarial Attacks
    Ali, Mahmoud N.
    Amer, Mohammed
    Elsisi, Mahmoud
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [10] Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in An Adversarial Environment
    Dong, Xingrong
    Wu, Zhaoxian
    Ling, Qing
    Tian, Zhi
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 235 - 248