Zeroth-order algorithms for stochastic distributed nonconvex optimization

Cited: 10
Authors
Yi, Xinlei [1 ]
Zhang, Shengjun [2 ]
Yang, Tao [3 ]
Johansson, Karl H. [1 ]
Affiliations
[1] KTH Royal Inst Technol & Digital Futures, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
[2] Univ North Texas, Dept Elect Engn, Denton, TX 76203 USA
[3] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China; Swedish Research Council
Keywords
Distributed nonconvex optimization; Gradient-free; Linear speedup; Polyak-Łojasiewicz condition; Stochastic optimization; MULTIAGENT OPTIMIZATION; CONVEX-OPTIMIZATION
DOI
10.1016/j.automatica.2022.110353
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In this paper, we consider a stochastic distributed nonconvex optimization problem in which the cost function is distributed over n agents that have access only to zeroth-order (ZO) information of the cost. This problem has various machine learning applications. As a solution, we propose two distributed ZO algorithms, in which at each iteration each agent samples the local stochastic ZO oracle at two points with a time-varying smoothing parameter. We show that the proposed algorithms achieve the linear speedup convergence rate O(√(p/(nT))) for smooth cost functions under state-dependent variance assumptions, which are more general than the commonly used bounded variance and Lipschitz assumptions, and the rate O(p/(nT)) when the global cost function additionally satisfies the Polyak-Łojasiewicz (P-L) condition, where p and T are the dimension of the decision variable and the total number of iterations, respectively. To the best of our knowledge, this is the first linear speedup result for distributed ZO algorithms; it means that performance can be improved systematically by adding more agents. We also show that the proposed algorithms converge linearly under relatively bounded second moment assumptions and the P-L condition. Through numerical experiments on generating adversarial examples from deep neural networks, we demonstrate the efficiency of our algorithms in comparison with baseline and recently proposed centralized and distributed ZO algorithms. © 2022 Elsevier Ltd. All rights reserved.
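To make the two-point oracle sampling concrete, below is a minimal Python sketch of a generic two-point ZO gradient estimator with a time-varying smoothing parameter. The Gaussian-smoothing form, the decay schedule delta_t = delta0/(t+1), and all names (zo_gradient, delta0) are illustrative assumptions for this sketch; the paper's exact estimator and parameter choices are given in the full text.

    # Minimal sketch of a two-point zeroth-order (ZO) gradient estimator.
    # Assumed here: Gaussian smoothing and the schedule delta_t = delta0/(t+1);
    # the paper may use a different estimator form and schedule.
    import numpy as np

    def zo_gradient(f, x, t, delta0=1e-2, rng=None):
        # f : stochastic ZO oracle mapping R^p -> R (function values only)
        # x : current iterate, shape (p,)
        # t : iteration counter driving the time-varying smoothing parameter
        rng = np.random.default_rng() if rng is None else rng
        delta_t = delta0 / (t + 1)            # time-varying smoothing parameter
        u = rng.standard_normal(x.shape)      # random Gaussian search direction
        # Two oracle queries per iteration, one on each side of x.
        return (f(x + delta_t * u) - f(x - delta_t * u)) / (2.0 * delta_t) * u

In the distributed setting of the paper, each of the n agents would apply such an estimator to its local cost and combine the result with a consensus (neighbor-averaging) step; the dependence of the estimator's variance on the dimension p is what produces the p-dependence in the O(√(p/(nT))) rate.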
Pages: 11