Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

Cited by: 0
Authors
Zhou, Guanqiang [1 ,2 ]
Xu, Ping [3 ]
Wang, Yue [4 ]
Tian, Zhi [1 ]
Affiliations
[1] George Mason Univ, Dept Elect & Comp Engn, Fairfax, VA 22030 USA
[2] Univ Iowa, Dept Elect & Comp Engn, Iowa City, IA 52242 USA
[3] Univ Texas Rio Grande Valley, Dept Elect & Comp Engn, Edinburg, TX 78539 USA
[4] Georgia State Univ, Dept Comp Sci, Atlanta, GA 30303 USA
Funding
U.S. National Science Foundation;
Keywords
NIST; Robustness; Distance learning; Computer aided instruction; Computational modeling; Convergence; Servers; Byzantine attacks; distributed learning; distributional shifts; norm-based screening (NBS); Wasserstein distance; OPTIMIZATION; MODELS;
DOI
10.1109/TNNLS.2024.3436149
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In distributed learning systems, robustness threats may arise from two major sources. On the one hand, distributional shifts between training data and test data can cause the trained model to exhibit poor out-of-sample performance. On the other hand, a portion of the worker nodes may be subject to Byzantine attacks, which can invalidate the learning result. In this article, we propose a new research direction that jointly considers distributional shifts and Byzantine attacks. We illuminate the major challenges in addressing these two issues simultaneously and, accordingly, design a new algorithm that equips distributed learning with both distributional robustness and Byzantine robustness. Our algorithm builds on recent advances in distributionally robust optimization (DRO) as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases, where the learning model is nonconvex, convex, and strongly convex, shedding light on its convergence behavior and its tolerance to Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the fraction of Byzantine nodes is 1/3 or higher, rather than 1/2 as commonly believed in the current literature. The experimental results verify our theoretical findings (on the breakdown point of NBS, among others) and also demonstrate the effectiveness of our algorithm against both robustness issues, justifying our choice of NBS over other widely used robust aggregation schemes. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
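The abstract describes NBS only at a high level, so the following is a minimal sketch of what a norm-based screening aggregator could look like at the server, written under assumptions not taken from the paper: the function name nbs_aggregate, the choice to screen exactly ceil(screen_frac * num_workers) updates, and the toy data are all illustrative, and the paper's exact screening rule and its coupling with the DRO objective may differ.

```python
import numpy as np

def nbs_aggregate(updates, screen_frac):
    """Norm-based screening (NBS) sketch: drop the updates with the largest
    Euclidean norms, then average the survivors.

    updates     : array of shape (num_workers, dim), one local update per worker.
    screen_frac : assumed upper bound on the fraction of Byzantine workers
                  (an illustrative parameter, not the paper's exact threshold).
    """
    updates = np.asarray(updates, dtype=float)
    num_workers = updates.shape[0]
    # Screen out roughly as many updates as there could be Byzantine workers.
    num_screened = int(np.ceil(screen_frac * num_workers))
    norms = np.linalg.norm(updates, axis=1)
    # Keep the updates with the smallest norms and average them.
    keep = np.argsort(norms)[: num_workers - num_screened]
    return updates[keep].mean(axis=0)

# Toy check: 10 honest workers plus 3 Byzantine workers sending huge updates.
# The Byzantine fraction 3/13 is below the 1/3 breakdown point stated above.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(10, 4))
byzantine = rng.normal(loc=-100.0, scale=1.0, size=(3, 4))
aggregated = nbs_aggregate(np.vstack([honest, byzantine]), screen_frac=3 / 13)
print(aggregated)  # close to the honest workers' mean of roughly 1.0
```

In this sketch the screening budget simply equals the assumed Byzantine fraction; keeping that fraction below 1/3 of the workers is consistent with the breakdown point derived in the abstract, whereas pushing it to 1/3 or beyond would, per the stated result, prevent any NBS-based algorithm from converging.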
Pages: 15
Related Papers
50 records in total; entries [41]-[50] shown below
  • [41] Tutorial: Toward Robust Deep Learning against Poisoning Attacks
    Chen, Huili
    Koushanfar, Farinaz
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2023, 22 (03)
  • [42] Adaptive Robust Learning Against Backdoor Attacks in Smart Homes
    Zhang, Jiahui
    Wang, Zhuzhu
    Ma, Zhuoran
    Ma, Jianfeng
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (13) : 23906 - 23916
  • [43] CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
    Xie, Chulin
    Chen, Minghao
    Chen, Pin-Yu
    Li, Bo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [44] RoFL: A Robust Federated Learning Scheme Against Malicious Attacks
    Wei, Ming
    Liu, Xiaofan
    Ren, Wei
    WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423 : 277 - 291
  • [45] FedChallenger: Challenge-Response-based Defence for Federated Learning against Byzantine Attacks
    Moyeen, M. A.
    Kaur, Kuljeet
    Agarwal, Anjali
    Manzano, Ricardo S.
    Zaman, Marzia
    Goel, Nishith
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023 : 3843 - 3848
  • [46] Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning
    Tian, Yuchen
    Zhang, Weizhe
    Simpson, Andrew
    Liu, Yang
    Jiang, Zoe Lin
    COMPUTER JOURNAL, 2023, 66 (03) : 711 - 726
  • [47] NetShield: An in-network architecture against byzantine failures in distributed deep learning
    Ren, Q.
    Zhu, S.
    Lu, L.
    Li, Z.
    Zhao, G.
    Zhang, Y.
    COMPUTER NETWORKS, 2023, 237
  • [48] Stable Adversarial Learning under Distributional Shifts
    Liu, Jiashuo
    Shen, Zheyan
    Cui, Peng
    Zhou, Linjun
    Kuang, Kun
    Li, Bo
    Lin, Yishi
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 8662 - 8670
  • [49] Robust and Resilient Distributed Optimal Frequency Control for Microgrids Against Cyber Attacks
    Liu, Yun
    Li, Yuanzheng
    Wang, Yu
    Zhang, Xian
    Gooi, Hoay Beng
    Xin, Huanhai
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (01) : 375 - 386
  • [50] Federated Learning Stability Under Byzantine Attacks
    Gouissem, A.
    Abualsaud, K.
    Yaacoub, E.
    Khattab, T.
    Guizani, M.
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022 : 572 - 577