Distributed Mirror Descent for Stochastic Learning over Rate-limited Networks

Cited by: 0
Authors
Nokleby, Matthew [1 ]
Bajwa, Waheed U. [2 ]
Affiliations
[1] Wayne State Univ, Detroit, MI 48202 USA
[2] Rutgers State Univ, Piscataway, NJ USA
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TM (Electrical Engineering); TN (Electronic and Communication Technology)
Discipline Codes: 0808; 0809
Abstract
We present and analyze two algorithms, termed distributed stochastic approximation mirror descent (D-SAMD) and accelerated distributed stochastic approximation mirror descent (AD-SAMD), for distributed, stochastic optimization from high-rate data streams over rate-limited networks. Devices contend with fast streaming rates by mini-batching samples in the data stream, and they collaborate via distributed consensus to compute variance-reduced averages of distributed subgradients. This induces a trade-off: mini-batching slows down the effective streaming rate but may also slow down convergence. We present two theoretical contributions that characterize this trade-off: (i) bounds on the convergence rates of D-SAMD and AD-SAMD, and (ii) sufficient conditions for order-optimum convergence of D-SAMD and AD-SAMD, in terms of the network size/topology and the ratio of the data streaming and communication rates. We find that AD-SAMD achieves order-optimum convergence in a larger regime than D-SAMD. We demonstrate the effectiveness of the proposed algorithms using numerical experiments.
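The following Python sketch is a rough, assumption-laden illustration of the D-SAMD recipe described in the abstract: each node forms a mini-batch subgradient from its local stream, a few consensus rounds average the subgradients across the network, and each node then takes a mirror-descent step. It assumes a Euclidean mirror map (so the mirror step reduces to an ordinary gradient step), a synthetic least-squares stream, and a ring-topology mixing matrix; the function names and parameter choices (e.g. `mixing_matrix`, `local_subgradient`, `d_samd`) are hypothetical and do not reproduce the authors' implementation.

```python
# Minimal sketch of distributed mini-batched mirror descent with consensus
# averaging of subgradients (D-SAMD-style). Illustrative assumptions only:
# Euclidean mirror map, synthetic least-squares data, ring-topology network.
import numpy as np

def mixing_matrix(n):
    """Doubly stochastic mixing weights for a ring of n >= 3 nodes."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1.0 / 3.0
        W[i, (i - 1) % n] = 1.0 / 3.0
        W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def local_subgradient(x, A, b):
    """Mini-batch gradient of the least-squares loss 0.5 * ||A x - b||^2 / m."""
    m = A.shape[0]
    return A.T @ (A @ x - b) / m

def d_samd(dim=10, n_nodes=8, n_rounds=200, batch_size=16,
           consensus_steps=3, step_size=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x_true = rng.normal(size=dim)            # ground truth for the synthetic stream
    W = mixing_matrix(n_nodes)
    x = np.zeros((n_nodes, dim))             # one local iterate per node
    for t in range(n_rounds):
        # 1) Each node mini-batches `batch_size` samples from its fast stream
        #    and forms a variance-reduced local subgradient.
        g = np.zeros((n_nodes, dim))
        for i in range(n_nodes):
            A = rng.normal(size=(batch_size, dim))
            b = A @ x_true + 0.1 * rng.normal(size=batch_size)
            g[i] = local_subgradient(x[i], A, b)
        # 2) A few rounds of rate-limited consensus approximately average
        #    the subgradients across the network.
        for _ in range(consensus_steps):
            g = W @ g
        # 3) Mirror-descent update; with the Euclidean mirror map this is
        #    a plain gradient step on each node's iterate.
        x = x - step_size * g
    return x.mean(axis=0), x_true

if __name__ == "__main__":
    x_hat, x_true = d_samd()
    print("estimation error:", np.linalg.norm(x_hat - x_true))
```

Larger mini-batches reduce subgradient variance but mean fewer update rounds for a fixed stream length, and more consensus steps per round cost communication; this is the trade-off the paper's bounds quantify.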
Pages: 5
Related Papers
50 records in total (entries [41]–[50] shown below)
  • [41] Eshraghi, Nima; Liang, Ben. Distributed Online Optimization over a Heterogeneous Network with Any-Batch Mirror Descent. 25th Americas Conference on Information Systems (AMCIS 2019), 2019.
  • [42] Shi, Chong-Xiao; Yang, Guang-Hong. Distributed mirror descent algorithm over unbalanced digraphs based on gradient weighting technique. Journal of the Franklin Institute - Engineering and Applied Mathematics, 2023, 360(14): 10656-10680.
  • [43] Eshraghi, Nima; Liang, Ben. Distributed Online Optimization over a Heterogeneous Network with Any-Batch Mirror Descent. International Conference on Machine Learning (ICML), Vol. 119, 2020.
  • [44] Yu, Chen; Tang, Hanlin; Renggli, Cedric; Kassing, Simon; Singla, Ankit; Alistarh, Dan; Zhang, Ce; Liu, Ji. Distributed Learning over Unreliable Networks. International Conference on Machine Learning (ICML), Vol. 97, 2019.
  • [45] Zhao, Xiaochuan; Sayed, Ali H. Distributed Clustering and Learning Over Networks. IEEE Transactions on Signal Processing, 2015, 63(13): 3285-3300.
  • [46] Makhdoumi, Ali; Ozdaglar, Asuman. Convergence Rate of Distributed ADMM Over Networks. IEEE Transactions on Automatic Control, 2017, 62(10): 5082-5095.
  • [47] Cong, Guojing; Bhardwaj, Onkar; Feng, Minwei. An efficient, distributed stochastic gradient descent algorithm for deep-learning applications. 2017 46th International Conference on Parallel Processing (ICPP), 2017: 11-20.
  • [48] Wu, Youlong; Wigger, Michele. Coding Schemes With Rate-Limited Feedback That Improve Over the No Feedback Capacity for a Large Class of Broadcast Channels. IEEE Transactions on Information Theory, 2016, 62(4): 2009-2033.
  • [49] Ai, Wu; Wang, Dianhui. Distributed stochastic configuration networks with cooperative learning paradigm. Information Sciences, 2020, 540: 1-16.
  • [50] Qi, Siyu; Chamain, Lahiru D.; Ding, Zhi. Hierarchical Training for Distributed Deep Learning Based on Multimedia Data over Band-Limited Networks. 2022 IEEE International Conference on Image Processing (ICIP), 2022: 2871-2875.