Zeroth-order algorithms for stochastic distributed nonconvex optimization

Cited: 11
Authors
Yi, Xinlei [1 ]
Zhang, Shengjun [2 ]
Yang, Tao [3 ]
Johansson, Karl H. [1 ]
Affiliations
[1] KTH Royal Inst Technol & Digital Futures, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
[2] Univ North Texas, Dept Elect Engn, Denton, TX 76203 USA
[3] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China; Swedish Research Council;
Keywords
Distributed nonconvex optimization; Gradient-free; Linear speedup; Polyak-Lojasiewicz condition; Stochastic optimization; Multiagent optimization; Convex optimization;
DOI
10.1016/j.automatica.2022.110353
CLC number
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
In this paper, we consider a stochastic distributed nonconvex optimization problem in which the cost function is distributed over n agents that have access only to zeroth-order (ZO) information of the cost. This problem arises in various machine learning applications. We propose two distributed ZO algorithms in which, at each iteration, each agent samples its local stochastic ZO oracle at two points with a time-varying smoothing parameter. We show that the proposed algorithms achieve the linear speedup convergence rate O(√(p/(nT))) for smooth cost functions under state-dependent variance assumptions, which are more general than the commonly used bounded variance and Lipschitz assumptions, and the rate O(p/(nT)) when the global cost function additionally satisfies the Polyak-Lojasiewicz (P-L) condition, where p is the dimension of the decision variable and T is the total number of iterations. To the best of our knowledge, this is the first linear speedup result for distributed ZO algorithms; it enables systematic performance improvements by adding more agents. We also show that the proposed algorithms converge linearly under relatively bounded second-moment assumptions and the P-L condition. Numerical experiments on generating adversarial examples from deep neural networks demonstrate the efficiency of our algorithms in comparison with baseline and recently proposed centralized and distributed ZO algorithms. (C) 2022 Elsevier Ltd. All rights reserved.
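The two-point oracle sampling described in the abstract can be illustrated with a standard Gaussian-smoothing estimator. The sketch below is a generic textbook construction under assumed notation, not the paper's exact oracle: each query perturbs the decision variable x in a random direction u and forms a finite-difference gradient estimate scaled by the smoothing parameter δ.

```python
import numpy as np

def two_point_zo_gradient(f, x, delta, rng):
    """Two-point zeroth-order gradient estimate via Gaussian smoothing.

    A generic estimator of the form g = (f(x + δu) - f(x - δu)) / (2δ) * u,
    with u ~ N(0, I_p); it is an unbiased estimate of the gradient of a
    smoothed surrogate of f, and its bias w.r.t. ∇f shrinks as δ → 0.
    """
    u = rng.standard_normal(x.shape)
    return (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

# Illustration on the smooth quadratic f(x) = ||x||^2 / 2, whose true
# gradient is x itself.  Averaging many estimates with a small smoothing
# parameter (mimicking a decaying, time-varying δ_t) reduces the variance.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([1.0, -2.0, 3.0])
est = np.mean([two_point_zo_gradient(f, x, delta=1e-3, rng=rng)
               for _ in range(20000)], axis=0)
print(np.round(est, 2))  # close to the true gradient [1, -2, 3]
```

In a distributed variant, each of the n agents would apply such an estimate to its own local cost and combine it with neighbor averaging; the linear speedup result says the averaged variance shrinks proportionally to n.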
Pages: 11