A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent

Cited: 0
Authors
Wang, Shuche [1 ]
Tan, Vincent Y. F. [2 ,3 ]
Affiliations
[1] Natl Univ Singapore, Inst Operat Res & Analyt, Singapore 117602, Singapore
[2] Natl Univ Singapore, Dept Math, Dept Elect & Comp Engn, Singapore 117602, Singapore
[3] Natl Univ Singapore, Inst Operat Res & Analyt, Singapore 117602, Singapore
Keywords
Vectors; Servers; Noise measurement; Signal processing algorithms; Machine learning algorithms; Convergence; Uplink; Mirrors; Downlink; Machine learning; Distributed gradient descent; mirror descent; adversarial corruptions; noisy channels; convergence rates;
DOI
10.1109/TSP.2025.3539883
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communications Technology];
Discipline Codes
0808; 0809
Abstract
Distributed gradient descent algorithms have come to the fore in modern machine learning, especially in parallelizing the handling of large datasets that are distributed across several workers. However, scant attention has been paid to analyzing the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions rather than random noise. In this paper, we formulate a novel problem in which adversarial corruptions are present in a distributed learning system. We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm. Extensive convergence analysis for (strongly) convex loss functions is provided for different choices of the stepsize. We carefully optimize the stepsize schedule to accelerate the convergence of the algorithm, while amortizing the effect of the corruption over time. Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
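The abstract describes the algorithm only at a high level, so the following is a minimal Python sketch of the lazy (dual-averaging) mirror descent template it builds on. The coordinate-wise median aggregator, the stepsize schedules, and the Euclidean mirror map below are illustrative assumptions, not the paper's actual design choices; the true corruption model, aggregation rule, and optimized stepsize schedule are given in the full text.

```python
import numpy as np

def robust_aggregate(grads):
    # Coordinate-wise median across workers: an assumed robust aggregator,
    # not necessarily the rule used in the paper.
    return np.median(np.stack(grads), axis=0)

def lazy_mirror_descent(grad_fns, x0, num_steps,
                        eta=lambda t: 1.0 / np.sqrt(t + 1.0),
                        corrupt=None):
    # Sketch of corruption-tolerant distributed lazy mirror descent.
    #   grad_fns : per-worker gradient oracles, grad_fns[i](x) -> ndarray
    #   corrupt  : optional adversary, corrupt(i, g, t) -> perturbed gradient
    #   eta      : stepsize schedule; the paper optimizes this carefully,
    #              1/sqrt(t+1) is only a common default.
    # With the Euclidean mirror map psi(x) = 0.5 * ||x||^2, the dual-to-primal
    # (mirror) step is the identity, so the lazy update reduces to x = z.
    z = x0.copy()  # dual state; grad psi(x0) = x0 for the Euclidean map
    x = x0.copy()
    for t in range(num_steps):
        grads = []
        for i, g_fn in enumerate(grad_fns):
            g = g_fn(x)                   # local gradient at worker i
            if corrupt is not None:
                g = corrupt(i, g, t)      # adversarial corruption on the uplink
            grads.append(g)
        g_hat = robust_aggregate(grads)   # server-side robust aggregation
        z = z - eta(t) * g_hat            # lazy (dual-averaging) update
        x = z                             # mirror step for the Euclidean map
    return x

# Toy usage: least squares split across 5 workers, one of which flips and
# scales its gradient adversarially.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 10))
    x_star = rng.normal(size=10)
    b = A @ x_star
    shards = np.array_split(np.arange(100), 5)
    grad_fns = [
        (lambda idx: lambda x: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx))(idx)
        for idx in shards
    ]
    adversary = lambda i, g, t: -10.0 * g if i == 0 else g
    x_hat = lazy_mirror_descent(grad_fns, np.zeros(10), num_steps=2000,
                                eta=lambda t: 0.2 / np.sqrt(t + 1.0),
                                corrupt=adversary)
    print("estimation error:", np.linalg.norm(x_hat - x_star))
```

With the Euclidean mirror map the lazy update coincides with gradient descent on the aggregated gradient; a non-Euclidean mirror map (e.g., negative entropy over the simplex) would change only the dual-to-primal step.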
Pages: 827-842
Number of pages: 16
Related Papers
50 records in total
  • [41] Gradient Descent-Based Adaptive Learning Control for Autonomous Underwater Vehicles With Unknown Uncertainties
    Qiu, Jianbin
    Ma, Min
    Wang, Tong
    Gao, Huijun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (12) : 5266 - 5273
  • [42] Trainable Proximal Gradient Descent-Based Channel Estimation for mmWave Massive MIMO Systems
    Zheng, Peicong
    Lyu, Xuantao
    Gong, Yi
IEEE WIRELESS COMMUNICATIONS LETTERS, 2023, 12 (10) : 1781 - 1785
  • [43] Regret Analysis of Online Gradient Descent-based Iterative Learning Control with Model Mismatch
    Balta, Efe C.
    Iannelli, Andrea
    Smith, Roy S.
    Lygeros, John
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 1479 - 1484
  • [44] Effective Gradient Descent-Based Chroma Subsampling Method for Bayer CFA Images in HEVC
    Chung, Kuo-Liang
    Lee, Yu-Ling
    Chien, Wei-Che
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (11) : 3281 - 3290
  • [45] An iterative gradient descent-based reinforcement learning policy for active control of structural vibrations
    Panda, Jagajyoti
    Chopra, Mudit
    Matsagar, Vasant
    Chakraborty, Souvik
    COMPUTERS & STRUCTURES, 2024, 290
  • [46] DAC-SGD: A Distributed Stochastic Gradient Descent Algorithm Based on Asynchronous Connection
    He, Aijia
    Chen, Zehong
    Li, Weichen
    Li, Xingying
    Li, Hongjun
    Zhao, Xin
IIP'17: PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATION PROCESSING, 2017
  • [47] Accelerated Distributed Nesterov Gradient Descent
    Qu, Guannan
    Li, Na
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2020, 65 (06) : 2566 - 2581
  • [48] Bayesian Distributed Stochastic Gradient Descent
    Teng, Michael
    Wood, Frank
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [49] RLC Circuits-Based Distributed Mirror Descent Method
    Yu, Yue
    Acikmese, Behcet
    IEEE CONTROL SYSTEMS LETTERS, 2020, 4 (03): : 548 - 553
  • [50] Distributed Gradient Descent for Functional Learning
    Yu, Zhan
    Fan, Jun
    Shi, Zhongjie
    Zhou, Ding-Xuan
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2024, 70 (09) : 6547 - 6571