Harnessing Smoothness to Accelerate Distributed Optimization

Cited: 402
Authors
Qu, Guannan [1 ]
Li, Na [1 ]
Affiliations
[1] Harvard Univ, Sch Engn & Appl Sci, Cambridge, MA 02138 USA
Source
IEEE Transactions on Control of Network Systems
Keywords
Distributed algorithms; multiagent systems; optimization methods; SUBGRADIENT METHODS; CONSENSUS
DOI
10.1109/TCNS.2017.2698261
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
There has been growing interest in the distributed optimization problem over a network, where the objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. The literature has developed consensus-based distributed (sub)gradient descent (DGD) methods and has shown that they achieve the same convergence rate O(log t/√t) as centralized (sub)gradient descent (CGD) when the function is convex but possibly nonsmooth. However, when the function is convex and smooth, it is unclear how to harness the smoothness within the DGD framework to obtain a faster convergence rate comparable to CGD's. In this paper, we propose a distributed algorithm that, despite using the same amount of communication per iteration as DGD, effectively harnesses the function's smoothness and converges to the optimum at a rate of O(1/t). If the objective function is further strongly convex, our algorithm achieves a linear convergence rate. Both rates match the convergence rates of CGD. The key step in our algorithm is a novel gradient estimation scheme that uses history information to achieve a fast and accurate estimate of the average gradient. To motivate the necessity of history information, we also show that it is impossible for a class of distributed algorithms like DGD to achieve a linear convergence rate without using history information, even if the objective function is strongly convex and smooth.
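The gradient estimation scheme the abstract alludes to is commonly known as gradient tracking: each agent maintains an auxiliary variable that tracks the network-average gradient by mixing its neighbors' trackers and adding the change in its own local gradient at each step. Below is a minimal, illustrative Python sketch of such an update on a toy scalar quadratic problem; the ring topology, mixing weights, step size, and local objectives are assumptions chosen for demonstration, not details taken from the paper.

```python
import numpy as np

# Toy problem: agent i holds f_i(x) = 0.5 * a_i * (x - b_i)^2 (scalar x).
# The global objective sum_i f_i(x) is minimized at x* = sum(a*b) / sum(a).
rng = np.random.default_rng(0)
n = 5
a = rng.uniform(1.0, 2.0, n)   # local curvatures (strong convexity)
b = rng.uniform(-1.0, 1.0, n)  # local minimizers

def grad(i, x):
    """Gradient of agent i's local function at x."""
    return a[i] * (x - b[i])

# Doubly stochastic mixing matrix W for an assumed ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

eta = 0.1                # step size (assumed)
x = np.zeros(n)          # each agent's estimate of the optimizer
s = np.array([grad(i, x[i]) for i in range(n)])  # trackers, s_i(0) = grad f_i(x_i(0))

for t in range(200):
    x_new = W @ x - eta * s  # consensus step plus a move along the tracked gradient
    # Mix neighbors' trackers, then add the local gradient change --
    # the "history information" that lets s_i track the average gradient.
    s = W @ s + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

x_star = np.sum(a * b) / np.sum(a)
print("agents:", np.round(x, 6), "optimum:", round(x_star, 6))
```

On this strongly convex, smooth instance all agents agree with the global minimizer to high precision within a few hundred iterations, consistent with the linear rate the abstract claims, while using only one round of neighbor communication per iteration.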
Pages: 1245-1260
Page count: 16
Related Papers
50 records in total
  • [21] Harnessing distributed musical expertise through edublogging
    Chong, Eddy K. M.
    AUSTRALASIAN JOURNAL OF EDUCATIONAL TECHNOLOGY, 2008, 24 (02) : 181 - 194
  • [23] On the smoothness of an objective function in quantile optimization problems
    Kibzun, AI
    Tretyakov, GL
    AUTOMATION AND REMOTE CONTROL, 1997, 58 (09) : 1459 - 1468
  • [24] Harnessing the Power of C-H Functionalization Chemistry to Accelerate Drug Discovery
    Li, Bing
    Tyagarajan, Sriram
    Dykstra, Kevin D.
    Cernak, Tim
    Vachal, Petr
    Krska, Shane W.
    SYNLETT, 2024: 862 - 876
  • [25] Harnessing the High Interfacial Electric Fields on Water Microdroplets to Accelerate Menshutkin Reactions
    Song, Zhexuan
    Liang, Chiyu
    Gong, Ke
    Zhao, Supin
    Yuan, Xu
    Zhang, Xinxing
    Xie, Jing
    JOURNAL OF THE AMERICAN CHEMICAL SOCIETY, 2023, 145 (48) : 26003 - 26008
  • [26] On the Adaptive Smoothness Functional Optimization of Quadrilateral Meshes
    Ivanyi, P.
    PROCEEDINGS OF THE SEVENTH INTERNATIONAL CONFERENCE ON ENGINEERING COMPUTATIONAL TECHNOLOGY, 2010, 94
  • [27] Dynamic Model Evaluation to Accelerate Distributed Machine Learning
    Caton, Simon
    Venugopal, Srikumar
    Bhushan, Shashi T. N.
    Velamuri, Vidya Sankar
    Katrinis, Kostas
    2018 IEEE INTERNATIONAL CONGRESS ON BIG DATA (IEEE BIGDATA CONGRESS), 2018: 150 - 157
  • [28] AN EXAMINATION OF DISTRIBUTED LAG MODEL COEFFICIENTS ESTIMATED WITH SMOOTHNESS PRIORS
    THURMAN, SS
    SWAMY, PAVB
    MEHTA, JS
    COMMUNICATIONS IN STATISTICS-THEORY AND METHODS, 1986, 15 (06) : 1723 - 1749
  • [29] Distributed compressed sensing to accelerate cine cardiac MRI
    Jafar Zamani
    Abbas N Moghaddam
    Hamidreza S Rad
    Journal of Cardiovascular Magnetic Resonance, 17 (Suppl 1)
  • [30] Harnessing Cloud Technologies for a Virtualized Distributed Computing Infrastructure
    di Costanzo, Alexandre
    de Assuncao, Marcos Dias
    Buyya, Rajkumar
    IEEE INTERNET COMPUTING, 2009, 13 (05) : 24 - 33