Asynchronous Zeroth-Order Distributed Optimization with Residual Feedback

Cited by: 2
Authors
Shen, Yi [1 ]
Zhang, Yan [1 ]
Nivison, Scott [2 ]
Bell, Zachary, I [2 ]
Zavlanos, Michael M. [1 ]
Affiliations
[1] Duke Univ, Dept Mech Engn & Mat Sci, Durham, NC 27706 USA
[2] Air Force Res Lab, Eglin AFB, FL USA
Keywords
DOI
10.1109/CDC45484.2021.9683470
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812;
Abstract
We consider a zeroth-order distributed optimization problem, where the global objective function is a black-box function and, as such, its gradient information is inaccessible to the local agents. Instead, the local agents can only use the values of the objective function to estimate the gradient and update their local decision variables. In this paper, we also assume that these updates are done asynchronously. To solve this problem, we propose an asynchronous zeroth-order distributed optimization method that relies on one-point residual feedback to estimate the unknown gradient. We show that this estimator is unbiased under asynchronous updating, and we theoretically analyze the convergence of the proposed method. We also present numerical experiments demonstrating that our method outperforms two-point methods under asynchronous updating. To the best of our knowledge, this is the first asynchronous zeroth-order distributed optimization method that is also supported by theoretical guarantees.
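
The abstract refers to a one-point residual-feedback gradient estimate, which reuses the previously observed perturbed function value so that each iteration requires only a single new function query. The following minimal Python sketch illustrates that idea for a single agent under illustrative assumptions (a stand-in quadratic objective, smoothing radius delta, step size lr, and uniform perturbations on the unit sphere); it is not the authors' asynchronous multi-agent algorithm.

import numpy as np

def residual_feedback_step(f, x, f_prev, delta=1e-2, lr=1e-3, rng=None):
    # One update using a one-point residual-feedback gradient estimate:
    #   g = (d / delta) * (f(x + delta*u) - f_prev) * u,
    # where f_prev is the perturbed function value observed at the previous step.
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)          # random perturbation direction
    u /= np.linalg.norm(u)              # normalize onto the unit sphere
    f_new = f(x + delta * u)            # single new black-box query
    g = (d / delta) * (f_new - f_prev) * u
    return x - lr * g, f_new            # descent step and value to reuse

# Illustrative usage on a stand-in objective (not from the paper).
f = lambda z: float(np.sum(z ** 2))
x = np.ones(5)
u0 = np.ones(5) / np.sqrt(5.0)
f_prev = f(x + 1e-2 * u0)               # initial perturbed query
for _ in range(2000):
    x, f_prev = residual_feedback_step(f, x, f_prev)
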
Pages: 3349-3354
Page count: 6
Related Papers
50 records in total
  • [41] Small Errors in Random Zeroth-Order Optimization Are Imaginary
    Jongeneel, Wouter
    Yue, Man-Chung
    Kuhn, Daniel
    SIAM JOURNAL ON OPTIMIZATION, 2024, 34 (03) : 2638 - 2670
  • [42] Zeroth-Order Methods for Nondifferentiable, Nonconvex, and Hierarchical Federated Optimization
    Qiu, Yuyang
    Shanbhag, Uday V.
    Yousefian, Farzad
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [43] A Generic Approach for Accelerating Stochastic Zeroth-Order Convex Optimization
    Yu, Xiaotian
    King, Irwin
    Lyu, Michael R.
    Yang, Tianbao
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 3040 - 3046
  • [44] On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms
    Cheng, Shuyu
    Wu, Guoqiang
    Zhu, Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [45] Zeroth-Order Learning in Continuous Games via Residual Pseudogradient Estimates
    Huang, Yuanhanqing
    Hu, Jianghai
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2025, 70 (04) : 2258 - 2273
  • [46] Automatic controller tuning using a zeroth-order optimization algorithm
    Zalkind, Daniel S.
    Dall'Anese, Emiliano
    Pao, Lucy Y.
    WIND ENERGY SCIENCE, 2020, 5 (04) : 1579 - 1600
  • [47] Zeroth-Order Optimization for Varactor-Tuned Matching Network
    Pirrone, Michelle
    Dall'Anese, Emiliano
    Barton, Taylor
    2022 IEEE/MTT-S INTERNATIONAL MICROWAVE SYMPOSIUM (IMS 2022), 2022, : 502 - 505
  • [48] ZONE: Zeroth-Order Nonconvex Multiagent Optimization Over Networks
    Hajinezhad, Davood
    Hong, Mingyi
    Garcia, Alfredo
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2019, 64 (10) : 3995 - 4010
  • [49] Lazy Queries Can Reduce Variance in Zeroth-Order Optimization
    Xiao, Quan
    Ling, Qing
    Chen, Tianyi
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2023, 71 : 3695 - 3709
  • [50] A Parallel Zeroth-Order Framework for Efficient Cellular Network Optimization
    He, Pengcheng
    Lu, Siyuan
    Xu, Fan
    Kang, Yibin
    Yan, Qi
    Shi, Qingjiang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (11) : 17522 - 17538