Distributed Mirror Descent for Stochastic Learning over Rate-limited Networks

Cited: 0
Authors
Nokleby, Matthew [1 ]
Bajwa, Waheed U. [2 ]
Affiliations
[1] Wayne State Univ, Detroit, MI 48202 USA
[2] Rutgers State Univ, Piscataway, NJ USA
Keywords
DOI
Not available
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Code
0808; 0809
Abstract
We present and analyze two algorithms, termed distributed stochastic approximation mirror descent (D-SAMD) and accelerated distributed stochastic approximation mirror descent (AD-SAMD), for distributed, stochastic optimization from high-rate data streams over rate-limited networks. Devices contend with fast streaming rates by mini-batching samples in the data stream, and they collaborate via distributed consensus to compute variance-reduced averages of distributed subgradients. This induces a trade-off: mini-batching slows down the effective streaming rate, but may also slow down convergence. We present two theoretical contributions that characterize this trade-off: (i) bounds on the convergence rates of D-SAMD and AD-SAMD, and (ii) sufficient conditions for order-optimum convergence of D-SAMD and AD-SAMD in terms of the network size/topology and the ratio of the data streaming and communication rates. We find that AD-SAMD achieves order-optimum convergence in a larger regime than D-SAMD. We demonstrate the effectiveness of the proposed algorithms using numerical experiments.
Pages: 5
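The abstract describes the core update pattern of D-SAMD: each device averages a mini-batch of stochastic subgradients from its local data stream and mixes iterates and gradients with its neighbors via a consensus step. The Python sketch below illustrates that pattern under simplifying assumptions (a Euclidean mirror map so the mirror step reduces to a gradient step, a least-squares loss, a fixed doubly stochastic mixing matrix W, and a single consensus round per update); the function name, step size, and loss are illustrative choices, not the authors' pseudocode.

import numpy as np

# Minimal sketch of distributed stochastic mirror descent with mini-batching
# and consensus averaging. Assumptions: Euclidean mirror map (gradient step),
# least-squares loss, doubly stochastic mixing matrix W, one consensus round
# per update. This is an illustration, not the paper's D-SAMD/AD-SAMD method.

def consensus_mirror_descent(W, streams, dim, batch_size=10, rounds=100, lr=0.1):
    """W: (n, n) doubly stochastic mixing matrix for the network graph.
    streams: list of n callables; streams[i](b) returns a (b, dim + 1) array
             of (features, label) samples drawn from device i's data stream."""
    n = W.shape[0]
    x = np.zeros((n, dim))                 # one local iterate per device
    for _ in range(rounds):
        grads = np.zeros((n, dim))
        for i in range(n):
            batch = streams[i](batch_size)             # mini-batch from stream i
            A, b = batch[:, :dim], batch[:, dim]
            # averaged mini-batch subgradient of the local least-squares loss
            grads[i] = A.T @ (A @ x[i] - b) / batch_size
        grads = W @ grads                  # consensus round: average neighbors'
        x = W @ x - lr * grads             # gradients and iterates, then step
    return x.mean(axis=0)                  # network-averaged estimate

Increasing batch_size slows the effective streaming rate each device must handle per update, which mirrors the mini-batching trade-off discussed in the abstract.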
Related Papers
50 records in total
  • [21] Distributed No-Regret Learning for Stochastic Aggregative Games over Networks
    Lei, Jinlong
    Yi, Peng
    Li, Li
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 7512 - 7519
  • [22] A Distributed Luenberger Observer for Linear State Feedback Systems With Quantized and Rate-Limited Communications
    Rego, Francisco Castro
    Pu, Ye
    Alessandretti, Andrea
    Aguiar, A. Pedro
    Pascoal, Antonio M.
    Jones, Colin N.
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2021, 66 (09) : 3922 - 3937
  • [23] Stochastic Gradient Descent with Polyak's Learning Rate
    Prazeres, Mariana
    Oberman, Adam M.
    JOURNAL OF SCIENTIFIC COMPUTING, 2021, 89 (01)
  • [25] An automatic learning rate decay strategy for stochastic gradient descent optimization methods in neural networks
    Wang, Kang
    Dou, Yong
    Sun, Tao
    Qiao, Peng
    Wen, Dong
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (10) : 7334 - 7355
  • [26] Channel Quality Dependent Rate-limited Scheduling Algorithm for IEEE 802.16 Wireless Networks
    Zhao, Cuicui
    Hu, Jinlong
    Zhou, Jihua
    Shi, Jinglin
    Dutkiewicz, Eryk
    2009 WRI INTERNATIONAL CONFERENCE ON COMMUNICATIONS AND MOBILE COMPUTING: CMC 2009, VOL 2, 2009, : 402 - +
  • [27] Distributed mirror descent method for saddle point problems over directed graphs
    Li, Jueyou
    Chen, Guo
    Dong, Zhaoyang
    Wu, Zhiyou
    Yao, Minghai
    COMPLEXITY, 2016, 21 (S2) : 178 - 190
  • [28] Robust decentralized stochastic gradient descent over unstable networks
    Zheng, Yanwei
    Zhang, Liangxu
    Chen, Shuzhen
    Zhang, Xiao
    Cai, Zhipeng
    Cheng, Xiuzhen
    COMPUTER COMMUNICATIONS, 2023, 203 : 163 - 179
  • [29] Distributed stochastic gradient descent for link prediction in signed social networks
    Zhang, Han
    Wu, Gang
    Ling, Qing
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2019, 2019 (1)