Harnessing Smoothness to Accelerate Distributed Optimization

Cited by: 402
Authors
Qu, Guannan [1 ]
Li, Na [1 ]
Affiliations
[1] Harvard Univ, Sch Engn & Appl Sci, Cambridge, MA 02138 USA
Keywords
Distributed algorithms; multiagent systems; optimization methods; SUBGRADIENT METHODS; CONSENSUS;
DOI
10.1109/TCNS.2017.2698261
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
There has been a growing effort in studying the distributed optimization problem over a network. The objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. The literature has developed consensus-based distributed (sub)gradient descent (DGD) methods and has shown that they attain the same convergence rate O(log t/√t) as the centralized (sub)gradient method (CGD) when the function is convex but possibly nonsmooth. However, when the function is convex and smooth, it is unclear, within the DGD framework, how to harness the smoothness to obtain a faster convergence rate comparable to CGD's. In this paper, we propose a distributed algorithm that, despite using the same amount of communication per iteration as DGD, effectively harnesses the function smoothness and converges to the optimum at a rate of O(1/t). If the objective function is further strongly convex, our algorithm achieves a linear convergence rate. Both rates match those of CGD. The key step in our algorithm is a novel gradient estimation scheme that uses history information to achieve a fast and accurate estimate of the average gradient. To motivate the necessity of history information, we also show that it is impossible for a class of distributed algorithms like DGD to achieve a linear convergence rate without using history information, even if the objective function is strongly convex and smooth.
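
The abstract does not spell out the gradient estimation scheme itself; the following is a minimal Python sketch of a gradient-tracking update consistent with its description, in which each agent keeps a local estimate of the average gradient and corrects it with the change in its own local gradient (the "history information"). The quadratic local objectives, ring topology, mixing weights, and step size below are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: gradient tracking over a 4-agent ring with quadratic
# local objectives f_i(x) = 0.5 * a_i * (x - b_i)^2 (illustrative data).
import numpy as np

n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])     # local curvatures
b = np.array([1.0, -1.0, 2.0, 0.5])    # local minimizers
grad = lambda x: a * (x - b)           # entry i is grad f_i(x_i)

# Doubly stochastic mixing matrix for a 4-agent ring (lazy Metropolis weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

eta = 0.05                             # constant step size (smooth case)
x = np.zeros(n)                        # local decision variables
y = grad(x)                            # tracking variable, initialized to local gradients
g_old = grad(x)

for t in range(500):
    x = W @ x - eta * y                # consensus step minus tracked average gradient
    g_new = grad(x)
    y = W @ y + g_new - g_old          # update tracker with the change in local gradients
    g_old = g_new

x_star = (a @ b) / a.sum()             # minimizer of the sum of local quadratics (= 0.7 here)
print(x, x_star)                       # each x_i should end up close to x_star
```

In this sketch every agent exchanges only its current x and y values with its neighbors once per iteration, i.e., the same per-iteration communication pattern as DGD, while the tracked variable y supplies the average-gradient estimate that drives convergence to the global minimizer.
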
Pages: 1245 - 1260
Page count: 16
Related Papers
50 items in total
  • [1] Harnessing Smoothness to Accelerate Distributed Optimization
    Qu, Guannan
    Li, Na
    2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC), 2016, : 159 - 166
  • [2] Distributed Nash Equilibrium Learning for Average Aggregative Games: Harnessing Smoothness to Accelerate the Algorithm
    Pan, Wei
    Xu, Xinli
    Lu, Yu
    Zhang, Weidong
    IEEE SYSTEMS JOURNAL, 2023, 17 (03) : 4855 - 4865
  • [3] Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
    Safaryan, Mher
    Hanzely, Filip
    Richtarik, Peter
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [4] Accelerate Distributed Stochastic Descent for Nonconvex Optimization with Momentum
    Cong, Guojing
    Liu, Tianyi
    2020 IEEE/ACM WORKSHOP ON MACHINE LEARNING IN HIGH PERFORMANCE COMPUTING ENVIRONMENTS (MLHPC 2020) AND WORKSHOP ON ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR SCIENTIFIC APPLICATIONS (AI4S 2020), 2020, : 29 - 39
  • [5] Harnessing Low-Fidelity Data to Accelerate Bayesian Optimization via Posterior Regularization
    Liu, Bin
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP 2020), 2020, : 140 - 146
  • [6] Harnessing the crowd to accelerate molecular medicine research
    Smith, Robert J.
    Merchant, Raina M.
    TRENDS IN MOLECULAR MEDICINE, 2015, 21 (07) : 403 - 405
  • [7] Harnessing the power of patent information to accelerate innovation
    Clark, Kerri L.
    Kowalski, Stanley P.
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2012, 2 (05) : 427 - 435
  • [8] HARNESSING THE COMMUNITY TO ACCELERATE CANCER RESEARCH OUTCOMES
    Butt, Alison J.
    Links, Nathaniel
    Renouf, Carole
    ASIA-PACIFIC JOURNAL OF CLINICAL ONCOLOGY, 2014, 10 : 4 - 4
  • [9] Harnessing venture philanthropy to accelerate medical progress
    Lopez, Juan Carlos
    Suojanen, Christian
    NATURE REVIEWS DRUG DISCOVERY, 2019, 18 (11) : 809 - 810
  • [10] Harnessing Biological Insight to Accelerate Tuberculosis Drug Discovery
    de Wet, Timothy J.
    Warner, Digby F.
    Mizrahi, Valerie
    ACCOUNTS OF CHEMICAL RESEARCH, 2019, 52 (08) : 2340 - 2348