Distributed Mirror Descent for Stochastic Learning over Rate-limited Networks

Cited by: 0
Authors
Nokleby, Matthew [1 ]
Bajwa, Waheed U. [2 ]
Affiliations
[1] Wayne State Univ, Detroit, MI 48202 USA
[2] Rutgers State Univ, Piscataway, NJ USA
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic and communication technology];
Subject classification codes
0808; 0809;
Abstract
We present and analyze two algorithms, termed distributed stochastic approximation mirror descent (D-SAMD) and accelerated distributed stochastic approximation mirror descent (AD-SAMD), for distributed stochastic optimization from high-rate data streams over rate-limited networks. Devices contend with fast streaming rates by mini-batching samples in the data stream, and they collaborate via distributed consensus to compute variance-reduced averages of distributed subgradients. This induces a trade-off: mini-batching slows down the effective streaming rate, but it may also slow down convergence. We present two theoretical contributions that characterize this trade-off: (i) bounds on the convergence rates of D-SAMD and AD-SAMD, and (ii) sufficient conditions for order-optimum convergence of D-SAMD and AD-SAMD, in terms of the network size/topology and the ratio of the data streaming and communication rates. We find that AD-SAMD achieves order-optimum convergence in a larger regime than D-SAMD. We demonstrate the effectiveness of the proposed algorithms using numerical experiments.
Pages: 5
Related Papers
50 results
  • [11] Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air
    Amiri, Mohammad Mohammadi
    Gunduz, Deniz
    2019 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2019, : 1432 - 1436
  • [12] Coding Schemes for Discrete Memoryless Multicast Networks with Rate-limited Feedback
    Wu, Youlong
    2015 IEEE INFORMATION THEORY WORKSHOP - FALL (ITW), 2015, : 197 - 201
  • [13] Stochastic mirror descent method for distributed multi-agent optimization
    Li, Jueyou
    Li, Guoquan
    Wu, Zhiyou
    Wu, Changzhi
    OPTIMIZATION LETTERS, 2018, 12 : 1179 - 1197
  • [14] Stochastic mirror descent method for distributed multi-agent optimization
    Li, Jueyou
    Li, Guoquan
    Wu, Zhiyou
    Wu, Changzhi
    OPTIMIZATION LETTERS, 2018, 12 (06) : 1179 - 1197
  • [15] Event-Triggered Distributed Stochastic Mirror Descent for Convex Optimization
    Xiong, Menghui
    Zhang, Baoyong
    Ho, Daniel W. C.
    Yuan, Deming
    Xu, Shengyuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (09) : 6480 - 6491
  • [16] Distributed mirror descent method with operator extrapolation for stochastic aggregative games
    Wang, Tongyu
    Yi, Peng
    Chen, Jie
    AUTOMATICA, 2024, 159
  • [17] Gossip-based distributed stochastic mirror descent for constrained optimization
    Fang, Xianju
    Zhang, Baoyong
    Yuan, Deming
    NEURAL NETWORKS, 2024, 175
  • [18] Packetized Predictive Control for Rate-Limited Networks via Sparse Representation
    Nagahara, Masaaki
    Quevedo, Daniel E.
    Ostergaard, Jan
    2012 IEEE 51ST ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2012, : 1362 - 1367
  • [19] Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach
    Zhang, Junshan
    Li, Na
    Dedeoglu, Mehmet
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021,
  • [20] Towards Model-Free LQR Control over Rate-Limited Channels
    Mitra, Aritra
    Ye, Lintao
    Gupta, Vijay
    6TH ANNUAL LEARNING FOR DYNAMICS & CONTROL CONFERENCE, 2024, 242 : 1253 - 1265