Distributed Random Reshuffling Over Networks

Cited by: 7
Authors
Huang, Kun [1,2]
Li, Xiao [3]
Milzarek, Andre [1]
Pu, Shi [1]
Qiu, Junwen [1,2]
Affiliations
[1] Chinese Univ Hong Kong, Shenzhen Res Inst Big Data, Sch Data Sci, Shenzhen 518172, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518129, Peoples R China
[3] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
Keywords
Convergence; Linear programming; Signal processing algorithms; Gradient methods; Distributed databases; Big Data; Machine learning algorithms; Distributed optimization; random reshuffling; stochastic gradient methods; STOCHASTIC OPTIMIZATION; LEARNING-BEHAVIOR; CONVERGENCE; CONSENSUS
DOI
10.1109/TSP.2023.3262181
CLC Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification
0808; 0809
Abstract
In this paper, we consider distributed optimization problems where n agents, each possessing a local cost function, collaboratively minimize the average of the local cost functions over a connected network. To solve the problem, we propose a distributed random reshuffling (D-RR) algorithm that invokes the random reshuffling (RR) update in each agent. We show that D-RR inherits favorable characteristics of RR for both smooth strongly convex and smooth nonconvex objective functions. In particular, for smooth strongly convex objective functions, D-RR achieves an O(1/T^2) rate of convergence (where T counts the number of epochs) in terms of the squared distance between the iterate and the global minimizer. When the objective function is assumed to be smooth nonconvex, we show that D-RR drives the squared norm of the gradient to 0 at a rate of O(1/T^{2/3}). These convergence results match those of centralized RR (up to constant factors) and outperform the distributed stochastic gradient descent (DSGD) algorithm when a relatively large number of epochs is run. Finally, we conduct a set of numerical experiments to illustrate the efficiency of the proposed D-RR method on both strongly convex and nonconvex distributed optimization problems.
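The abstract's setting is the standard finite-sum consensus problem: n agents over a connected network jointly minimize the average of the local costs f_i, where (as RR requires) each f_i is itself a finite sum of component functions over the agent's m local samples. Below is a minimal Python sketch of one plausible form of the D-RR update, assuming a combine-then-adapt scheme: at each inner step every agent takes a gradient step on the next sample in its own independently shuffled order, and the network then averages the results through a doubly stochastic mixing matrix W. The function name d_rr, the constant step size, and the toy quadratic test are illustrative assumptions, not the paper's exact formulation (whose analysis uses decaying step sizes).

    import numpy as np

    def d_rr(grads, W, x0, alpha, epochs, seed=0):
        # Sketch of distributed random reshuffling (D-RR), under the
        # assumptions stated above.
        #   grads[i][j](x) -> gradient of component f_{i,j} at x (agent i, sample j)
        #   W              -> (n, n) doubly stochastic mixing matrix of the network
        #   x0             -> (n, d) initial iterates, one row per agent
        #   alpha          -> step size (constant here; the paper uses decaying steps)
        rng = np.random.default_rng(seed)
        n, d = x0.shape
        m = len(grads[0])  # local sample count, assumed equal across agents
        x = x0.copy()
        for _ in range(epochs):
            # Each agent draws its own independent permutation of its local samples.
            perms = [rng.permutation(m) for _ in range(n)]
            for ell in range(m):
                # Local gradient step on the next shuffled sample, then gossip mixing.
                g = np.stack([grads[i][perms[i][ell]](x[i]) for i in range(n)])
                x = W @ (x - alpha * g)
        return x

    # Toy check: f_{i,j}(x) = 0.5 * ||x - a[i, j]||^2, so the global minimizer
    # is the mean of all targets a[i, j].
    n, m, d = 3, 4, 2
    a = np.random.default_rng(1).normal(size=(n, m, d))
    grads = [[(lambda x, t=a[i, j]: x - t) for j in range(m)] for i in range(n)]
    W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
    x = d_rr(grads, W, np.zeros((n, d)), alpha=0.05, epochs=300)
    print(np.abs(x - a.reshape(-1, d).mean(axis=0)).max())  # small residual expected

In the toy check every agent's iterate should land near the mean of all a[i, j]; with a constant step size D-RR only reaches a neighborhood of the minimizer, which is why the printed residual is small but not exactly zero.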
Pages: 1143-1158
Page count: 16
Related Papers
50 records in total
  • [1] Distributed Random Reshuffling Methods with Improved Convergence
    Huang, Kun
    Zhou, Linli
    Pu, Shi
arXiv, 2023.
  • [2] Distributed Average Consensus over Random Networks
    Alaviani, S. Sh
    Elia, N.
2019 AMERICAN CONTROL CONFERENCE (ACC), 2019: 1854-1859
  • [3] Distributed Linear Equations Over Random Networks
    Yi, Peng
    Lei, Jinlong
    Chen, Jie
    Hong, Yiguang
    Shi, Guodong
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (04): 2607-2614
  • [4] Distributed Optimization Over Dependent Random Networks
    Aghajan, Adel
    Touri, Behrouz
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (08): 4812-4826
  • [5] On Distributed Optimization Over Random Independent Networks
    Aghajan, Adel
    Touri, Behrouz
2022 AMERICAN CONTROL CONFERENCE (ACC), 2022: 4268-4273
  • [6] ASYMPTOTIC PERFORMANCE OF DISTRIBUTED DETECTION OVER RANDOM NETWORKS
    Bajovic, Dragana
    Jakovetic, Dusan
    Xavier, Joao
    Sinopoli, Bruno
    Moura, Jose M. F.
2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2011: 3008-3011
  • [7] Reshuffling scale-free networks: From random to assortative
    Xulvi-Brunet, R
    Sokolov, IM
PHYSICAL REVIEW E, 2004, 70 (06)
  • [8] Distributed Subgradient Methods for Convex Optimization Over Random Networks
    Lobel, Ilan
    Ozdaglar, Asuman
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2011, 56 (06): 1291-1306
  • [9] Convergence rates for distributed stochastic optimization over random networks
    Jakovetic, Dusan
    Bajovic, Dragana
    Sahu, Anit Kumar
    Kar, Soummya
2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2018: 4238-4245
  • [10] Convergence Analysis of Distributed Subgradient Methods over Random Networks
    Lobel, Ilan
    Ozdaglar, Asuman
2008 46TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING, VOLS 1-3, 2008: 353+