Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

Cited: 0
Authors
Zhou, Guanqiang [1 ,2 ]
Xu, Ping [3 ]
Wang, Yue [4 ]
Tian, Zhi [1 ]
Affiliations
[1] George Mason Univ, Dept Elect & Comp Engn, Fairfax, VA 22030 USA
[2] Univ Iowa, Dept Elect & Comp Engn, Iowa City, IA 52242 USA
[3] Univ Texas Rio Grande Valley, Dept Elect & Comp Engn, Edinburg, TX 78539 USA
[4] Georgia State Univ, Dept Comp Sci, Atlanta, GA 30303 USA
Funding
U.S. National Science Foundation;
Keywords
NIST; Robustness; Distance learning; Computer aided instruction; Computational modeling; Convergence; Servers; Byzantine attacks; distributed learning; distributional shifts; norm-based screening (NBS); Wasserstein distance; OPTIMIZATION; MODELS;
DOI
10.1109/TNNLS.2024.3436149
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In distributed learning systems, robustness threats may arise from two major sources. On the one hand, due to distributional shifts between training data and test data, the trained model could exhibit poor out-of-sample performance. On the other hand, a portion of working nodes might be subject to Byzantine attacks, which could invalidate the learning result. In this article, we propose a new research direction that jointly considers distributional shifts and Byzantine attacks. We illuminate the major challenges in addressing these two issues simultaneously. Accordingly, we design a new algorithm that equips distributed learning with both distributional robustness and Byzantine robustness. Our algorithm builds on recent advances in distributionally robust optimization (DRO) as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases, with the learning model being nonconvex, convex, or strongly convex, shedding light on its convergence behavior and resilience against Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the fraction of Byzantine nodes is (1/3) or higher, rather than (1/2), as commonly believed in the current literature. The experimental results verify our theoretical findings (on the breakdown point of NBS, among others) and also demonstrate the effectiveness of our algorithm against both robustness issues, justifying our choice of NBS over other widely used robust aggregation schemes. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
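As a rough illustration of the norm-based screening (NBS) aggregation the abstract refers to, the sketch below shows the basic idea: the server discards the worker updates with the largest norms (where Byzantine outliers tend to land) and averages the remainder. The function name, interface, and screening count are our own assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def nbs_aggregate(grads: np.ndarray, num_screened: int) -> np.ndarray:
    """Norm-based screening (sketch): drop the `num_screened` updates
    with the largest Euclidean norms, then average the survivors.

    grads: (num_workers, dim) array, one gradient/update per worker.
    """
    norms = np.linalg.norm(grads, axis=1)          # per-worker update norms
    keep = np.argsort(norms)[: len(grads) - num_screened]  # smallest-norm updates
    return grads[keep].mean(axis=0)                 # average the screened set

# Toy usage: three honest workers near (1, 0), one large-norm Byzantine update.
grads = np.array([[1.0, 0.0], [1.1, 0.0], [0.9, 0.0], [100.0, 100.0]])
agg = nbs_aggregate(grads, num_screened=1)          # -> close to [1.0, 0.0]
```

Under this scheme, screening capacity is the binding constraint: once Byzantine workers make up a large enough fraction of the population, honest updates are inevitably discarded alongside malicious ones, which is consistent with the breakdown-point analysis the abstract describes.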
Pages: 15
Related Papers
50 items in total
  • [31] Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
    Yin, Dong
    Chen, Yudong
    Ramchandran, Kannan
    Bartlett, Peter
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [32] Byzantine-robust distributed sparse learning for M-estimation
    Tu, Jiyuan
    Liu, Weidong
    Mao, Xiaojun
    MACHINE LEARNING, 2023, 112 (10) : 3773 - 3804
  • [33] Detection and Mitigation of Byzantine Attacks in Distributed Training
    Konstantinidis, Konstantinos
    Vaswani, Namrata
    Ramamoorthy, Aditya
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (02) : 1493 - 1508
  • [34] An efficient and robust deep learning based network anomaly detection against distributed denial of service attacks
    Kasim, Omer
    COMPUTER NETWORKS, 2020, 180
  • [35] Resilient Mechanism Against Byzantine Failure for Distributed Deep Reinforcement Learning
    Zhang, Mingyue
    Jin, Zhi
    Hou, Jian
    Luo, Renwei
    2022 IEEE 33RD INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING (ISSRE 2022), 2022, : 378 - 389
  • [36] Communication-Efficient and Byzantine-Robust Distributed Learning with Error Feedback
    Ghosh, A.
    Maity, R. K.
    Kadhe, S.
    Mazumdar, A.
    Ramchandran, K.
    IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY, 2021, 2 (03) : 942 - 953
  • [37] Asynchronous Byzantine-Robust Stochastic Aggregation with Variance Reduction for Distributed Learning
    Zhu, Zehan
    Huang, Yan
    Zhao, Chengcheng
    Xu, Jinming
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 151 - 158
  • [38] Communication-efficient and Byzantine-robust distributed learning with statistical guarantee
    Zhou, Xingcai
    Chang, Le
    Xu, Pengfei
    Lv, Shaogao
    PATTERN RECOGNITION, 2023, 137
  • [39] Stochastic alternating direction method of multipliers for Byzantine-robust distributed learning
    Lin, Feng
    Li, Weiyu
    Ling, Qing
    SIGNAL PROCESSING, 2022, 195
  • [40] Model poisoning attacks against distributed machine learning systems
    Tomsett, Richard
    Chan, Kevin
    Chakraborty, Supriyo
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006