Distributed zeroth-order optimization: Convergence rates that match centralized counterpart

Cited by: 4
Authors
Yuan, Deming [1 ]
Wang, Lei [2 ]
Proutiere, Alexandre [3 ]
Shi, Guodong [4 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Automat, Nanjing 210094, Jiangsu, Peoples R China
[2] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310058, Peoples R China
[3] KTH Royal Inst Technol, Dept Automat Control, SE-10044 Stockholm, Sweden
[4] Univ Sydney, Sydney Inst Robot & Intelligent Syst, Australian Ctr Field Robot, Sydney, NSW 2006, Australia
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
Distributed optimization; Zeroth-order optimization; Multi-stage optimization algorithm; Optimal convergence rate; MULTIAGENT OPTIMIZATION; CONVEX-OPTIMIZATION; SUBGRADIENT METHODS; ALGORITHM; POWER;
DOI
10.1016/j.automatica.2023.111328
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions cannot be described in closed analytical form. The key idea of zeroth-order optimization is that a learner can build gradient estimates from queries sent to the cost function; traditional gradient descent algorithms can then be executed with the gradients replaced by these estimates. For optimization over large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can likewise be used to develop scalable, distributed algorithms. In this paper, we aim to understand how performance, measured by convergence rates, changes in the transition from centralized to distributed zeroth-order algorithms, focusing on multi-agent systems with time-varying communication networks. We establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost due to the distributed nature of the algorithms, the established convergence rates are shown to match their centralized counterparts. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision sets. © 2023 Elsevier Ltd. All rights reserved.
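To make the query-based idea in the abstract concrete, the following is a minimal Python sketch (not the authors' algorithm) of a distributed two-point zeroth-order subgradient iteration: each agent averages its neighbors' iterates through a doubly stochastic mixing matrix and then steps along a local gradient estimate built from two function queries. The function names, the fixed mixing matrix, the step-size schedule, and the three-agent example are illustrative assumptions; the paper itself also treats time-varying networks and one-point oracles.

import numpy as np

def two_point_zo_grad(f, x, delta, rng):
    # Two-point zeroth-order gradient estimate: only two queries of f are needed.
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # random direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def distributed_zo_subgradient(costs, W, x0, steps=2000, delta=1e-3, eta0=0.1, seed=0):
    # Illustrative distributed zeroth-order subgradient iteration (hypothetical helper,
    # not the algorithm analyzed in the paper). costs[i] is agent i's local cost and
    # W is a doubly stochastic mixing matrix for the (here: fixed) communication graph.
    rng = np.random.default_rng(seed)
    n = len(costs)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))    # row i holds agent i's iterate
    for k in range(1, steps + 1):
        X = W @ X                                        # consensus (communication) step
        for i, f in enumerate(costs):
            g = two_point_zo_grad(f, X[i], delta, rng)
            X[i] = X[i] - (eta0 / np.sqrt(k)) * g        # local step, diminishing rate eta0/sqrt(k)
    return X.mean(axis=0)                                # network-average estimate

# Usage: three agents minimizing a sum of quadratics; the minimizer of the sum is x = 1.
if __name__ == "__main__":
    costs = [lambda x, c=c: np.sum((x - c) ** 2) for c in (0.0, 1.0, 2.0)]
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])                   # doubly stochastic weights
    print(distributed_zo_subgradient(costs, W, np.zeros(4)))

The scaling d/(2*delta) makes the two-query difference an estimate of the gradient of a smoothed version of f, which is why only function evaluations, and no derivatives, are required.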
Pages: 13