Communication-Efficient Regret-Optimal Distributed Online Convex Optimization

Cited: 0
Authors
Liu, Jiandong [1 ]
Zhang, Lan [1 ]
He, Fengxiang [2 ]
Zhang, Chi [1 ]
Jiang, Shanyang [1 ]
Li, Xiang-Yang [1 ]
Affiliations
[1] University of Science and Technology of China, LINKE Lab, Hefei 230026, China
[2] University of Edinburgh, Artificial Intelligence and its Applications Institute, School of Informatics, Edinburgh EH8 9AB, Scotland, UK
Funding
National Key R&D Program of China
Keywords
Complexity theory; Stochastic processes; Robots; Robot kinematics; Costs; Convex functions; Loss measurement; Communication complexity; distributed online learning; convex optimization
DOI
10.1109/TPDS.2024.3403883
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Online convex optimization in distributed systems has shown great promise for collaborative learning on data streams with large numbers of learners, for example in collaborative coordination across robot and IoT networks. When implemented in such communication-constrained networks, distributed online convex optimization (DOCO) faces two critical yet distinct objectives: minimizing the overall regret and minimizing the communication cost. Achieving both simultaneously is challenging, especially when the number of learners n and the learning horizon T are prohibitively large. To address this challenge, we propose novel algorithms for the typical adversarial and stochastic settings. Our algorithms reduce the communication complexity of the algorithms with state-of-the-art regret by factors of O(n^2) and O(√(nT)) in the adversarial and stochastic settings, respectively. We are the first to achieve nearly optimal regret and communication complexity simultaneously, up to polylogarithmic factors. We validate our algorithms through experiments on real-world classification datasets. With appropriate parameters, our algorithms achieve 90% to 99% communication savings while matching the accuracy of existing methods in most cases. The code is available at https://github.com/GGBOND121382/Communication-Efficient_Regret-Optimal_DOCO.
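To make the two quantities the abstract trades off concrete, below is a minimal, hypothetical sketch, not the paper's algorithm: distributed online gradient descent over n learners with periodic model averaging, where the averaging period, step size, synthetic quadratic losses, and all variable names are illustrative assumptions. Lengthening the averaging period lowers the communication count but typically increases the cumulative regret, which is the tension the proposed algorithms address.

```python
# Minimal DOCO-style sketch (illustrative only, not the paper's method):
# n learners run local online gradient descent on private quadratic losses
# and average their models every `comm_period` rounds. We track cumulative
# regret against a fixed comparator and count communicated model vectors.
import numpy as np

rng = np.random.default_rng(0)

n, T, d = 8, 200, 5          # learners, rounds, dimension (assumed values)
comm_period = 10             # rounds between averaging steps
eta = 0.1                    # base step size

w = np.zeros((n, d))         # one local model per learner
x_star = rng.normal(size=d)  # hidden target generating the synthetic losses
regret = 0.0
messages = 0                 # model vectors exchanged (communication proxy)

for t in range(1, T + 1):
    # Learner i observes f_{i,t}(w) = 0.5 * ||w - (x_star + noise)||^2.
    targets = x_star + 0.1 * rng.normal(size=(n, d))
    grads = w - targets                                   # gradients at local models
    losses = 0.5 * np.sum((w - targets) ** 2, axis=1)
    comparator_losses = 0.5 * np.sum((x_star - targets) ** 2, axis=1)
    regret += np.sum(losses - comparator_losses)

    # Local online gradient descent step (costs no communication).
    w -= eta / np.sqrt(t) * grads

    # Periodic averaging: the only step that costs communication.
    if t % comm_period == 0:
        w[:] = w.mean(axis=0)                             # average all local models
        messages += n                                     # each learner sends one vector

print(f"cumulative regret ~ {regret:.2f}, model vectors communicated = {messages}")
```

Rerunning this sketch with a larger `comm_period` reduces `messages` roughly proportionally while the regret grows, which is a crude stand-in for the regret/communication trade-off the paper optimizes.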
Pages: 2270-2283
Page count: 14