Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

Cited: 0
Authors
Zhou, Guanqiang [1 ,2 ]
Xu, Ping [3 ]
Wang, Yue [4 ]
Tian, Zhi [1 ]
Affiliations
[1] George Mason Univ, Dept Elect & Comp Engn, Fairfax, VA 22030 USA
[2] Univ Iowa, Dept Elect & Comp Engn, Iowa City, IA 52242 USA
[3] Univ Texas Rio Grande Valley, Dept Elect & Comp Engn, Edinburg, TX 78539 USA
[4] Georgia State Univ, Dept Comp Sci, Atlanta, GA 30303 USA
Funding
U.S. National Science Foundation
Keywords
NIST; Robustness; Distance learning; Computer aided instruction; Computational modeling; Convergence; Servers; Byzantine attacks; distributed learning; distributional shifts; norm-based screening (NBS); Wasserstein distance; OPTIMIZATION; MODELS;
DOI
10.1109/TNNLS.2024.3436149
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In distributed learning systems, robustness threats arise from two major sources. On the one hand, distributional shifts between training data and test data can cause the trained model to exhibit poor out-of-sample performance. On the other hand, a portion of working nodes may be subject to Byzantine attacks, which can invalidate the learning result. In this article, we propose a new research direction that jointly considers distributional shifts and Byzantine attacks, and we illuminate the major challenges in addressing these two issues simultaneously. Accordingly, we design a new algorithm that equips distributed learning with both distributional robustness and Byzantine robustness. Our algorithm builds on recent advances in distributionally robust optimization (DRO) as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases, with the learning model being nonconvex, convex, and strongly convex, shedding light on its convergence behavior and resilience against Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the fraction of Byzantine nodes is 1/3 or higher, rather than 1/2, as commonly believed in the current literature. The experimental results verify our theoretical findings (on the breakdown point of NBS, among others) and also demonstrate the effectiveness of our algorithm against both robustness issues, justifying our choice of NBS over other widely used robust aggregation schemes. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
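The abstract describes norm-based screening (NBS) as the robust aggregation scheme: the server discards the worker updates with the largest norms before averaging, on the premise that Byzantine nodes must inflate gradient norms to exert influence. The following is a minimal illustrative sketch of that idea, not the paper's actual implementation; the function name, the fixed screening budget, and the toy data are all assumptions for illustration.

```python
import numpy as np

def nbs_aggregate(grads: np.ndarray, num_byzantine: int) -> np.ndarray:
    """Norm-based screening (sketch): drop the `num_byzantine` worker
    gradients with the largest Euclidean norms, then average the rest.

    grads: array of shape (num_workers, dim), one gradient per worker.
    """
    norms = np.linalg.norm(grads, axis=1)
    # Keep the smallest-norm gradients; large-norm outliers are screened out.
    keep = np.argsort(norms)[: len(grads) - num_byzantine]
    return grads[keep].mean(axis=0)

# Toy example: 4 honest workers plus 2 attackers sending huge gradients.
honest = np.ones((4, 3))
attacks = 100.0 * np.ones((2, 3))
grads = np.vstack([honest, attacks])
print(nbs_aggregate(grads, num_byzantine=2))  # -> [1. 1. 1.]
```

Note that this sketch assumes the screening budget matches the true number of Byzantine workers; the paper's theoretical result (no convergence once Byzantine nodes reach 1/3 of the total) concerns exactly how far such screening can be pushed.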
Pages: 15