Distributed Random Reshuffling Over Networks

Cited by: 7
Authors
Huang, Kun [1 ,2 ]
Li, Xiao [3 ]
Milzarek, Andre [1 ]
Pu, Shi [1 ]
Qiu, Junwen [1 ,2 ]
Affiliations
[1] Chinese Univ Hong Kong, Shenzhen Res Inst Big Data, Sch Data Sci, Shenzhen 518172, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518129, Peoples R China
[3] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
Keywords
Convergence; Linear programming; Signal processing algorithms; Gradient methods; Distributed databases; Big Data; Machine learning algorithms; Distributed optimization; random reshuffling; stochastic gradient methods; STOCHASTIC OPTIMIZATION; LEARNING-BEHAVIOR; CONVERGENCE; CONSENSUS;
DOI
10.1109/TSP.2023.3262181
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809
Abstract
In this paper, we consider distributed optimization problems where n agents, each possessing a local cost function, collaboratively minimize the average of the local cost functions over a connected network. To solve the problem, we propose a distributed random reshuffling (D-RR) algorithm that invokes the random reshuffling (RR) update in each agent. We show that D-RR inherits favorable characteristics of RR for both smooth strongly convex and smooth nonconvex objective functions. In particular, for smooth strongly convex objective functions, D-RR achieves an O(1/T^2) rate of convergence (where T counts the epoch number) in terms of the squared distance between the iterate and the global minimizer. When the objective function is assumed to be smooth nonconvex, we show that D-RR drives the squared norm of the gradient to 0 at a rate of O(1/T^(2/3)). These convergence results match those of centralized RR (up to constant factors) and outperform the distributed stochastic gradient descent (DSGD) algorithm if we run a relatively large number of epochs. Finally, we conduct a set of numerical experiments to illustrate the efficiency of the proposed D-RR method on both strongly convex and nonconvex distributed optimization problems.
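The abstract describes D-RR as each agent interleaving random-reshuffling gradient steps on its local data with gossip averaging over the network. The following is a minimal sketch of that idea on a toy distributed least-squares problem; the ring topology, mixing weights, step size, and problem sizes are illustrative assumptions, not the paper's actual experimental setup or its exact update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n agents, each holding m quadratic component costs
#   f_{i,l}(x) = 0.5 * (a_{i,l}^T x - b_{i,l})^2,
# so the global objective is a strongly convex least-squares problem.
n, m, d = 4, 8, 3
A = rng.normal(size=(n, m, d))
b = rng.normal(size=(n, m))

# Doubly stochastic mixing matrix for a ring of n agents.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))   # row i is agent i's current iterate
alpha = 0.02           # constant step size, for the sketch only

for epoch in range(500):
    # random reshuffling: each agent draws an independent permutation
    # of its local samples at the start of every epoch
    perms = [rng.permutation(m) for _ in range(n)]
    for l in range(m):
        grads = np.empty_like(x)
        for i in range(n):
            a = A[i, perms[i][l]]
            grads[i] = (a @ x[i] - b[i, perms[i][l]]) * a
        # local gradient step followed by one round of neighbor averaging
        x = W @ (x - alpha * grads)

# compare all agents against the global least-squares minimizer
x_star = np.linalg.lstsq(A.reshape(n * m, d), b.reshape(n * m), rcond=None)[0]
err = np.max(np.linalg.norm(x - x_star, axis=1))
print(err)
```

With a constant step size the agents settle into a small neighborhood of the global minimizer; the paper's rates concern appropriately decaying step sizes, which this sketch does not implement.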
Pages: 1143 - 1158 (16 pages)