Zeroth-Order Methods for Online Distributed Optimization with Strongly Pseudoconvex Cost Functions

Cited: 0
Authors
Xiaoxi YAN [1 ]
Muyuan MA [1 ]
Kaihong LU [2 ]
Affiliations
[1] School of Electrical and Information Engineering, Jiangsu University
[2] School of Electrical Engineering and Automation, Shandong University of Science and Technology
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
DOI
Not available
CLC Number
O224 [Mathematical theory of optimization]
Subject Classification Codes
070105; 1201
Abstract
This paper studies an online distributed optimization problem over multi-agent systems. In this problem, the goal of the agents is to cooperatively minimize the sum of locally dynamic cost functions. Different from most existing works on distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and the real gradients of the objective functions are not available. To handle this problem, an online zeroth-order stochastic optimization algorithm involving a single-point gradient estimator is proposed. Under the algorithm, each agent only has access to the information associated with its own cost function and the estimate of the gradient, and exchanges local state information with its immediate neighbors via a time-varying digraph. The performance of the algorithm is measured by the expectation of the dynamic regret. Under mild assumptions on the graphs, we prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret grows sublinearly. Finally, a simulation example is given to illustrate the validity of our results.
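The single-point gradient estimator mentioned in the abstract can be illustrated with a minimal sketch. This is a generic one-point zeroth-order estimate, not the paper's exact scheme (the function name and parameters here are our own): the agent queries its cost at a single randomly perturbed point and scales the observed value along the perturbation direction, which yields an unbiased estimate of the gradient of a smoothed version of the cost.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """One-point (single-point) zeroth-order gradient estimate.

    Samples a direction u uniformly on the unit sphere and returns
    (d / delta) * f(x + delta * u) * u. Its expectation equals the
    gradient of the delta-smoothed version of f, so it can replace
    a real gradient when only function values are observable.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)          # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

# Illustration: for the linear cost f(z) = a @ z, averaging many
# single-point estimates at x = 0 recovers the true gradient a.
rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 3.0])
f = lambda z: float(z @ a)
est = sum(one_point_gradient_estimate(f, np.zeros(3), 0.1, rng)
          for _ in range(50000)) / 50000
```

Note that the estimator uses only one function evaluation per step, matching the online setting where each dynamic cost can be queried once, but its variance scales with d/delta, which is why such schemes require careful step-size and smoothing-parameter choices.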
Pages: 145-160 (16 pages)
Related Papers
50 items total
  • [1] Online Distributed Optimization With Strongly Pseudoconvex-Sum Cost Functions
    Lu, Kaihong
    Jing, Gangshan
    Wang, Long
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2020, 65 (01) : 426 - 433
  • [2] Online distributed optimization with strongly pseudoconvex-sum cost functions and coupled inequality constraints
    Lu, Kaihong
    Xu, Hang
    [J]. AUTOMATICA, 2023, 156
  • [3] Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions
    Yan, Xiaoxi
    Li, Cheng
    Lu, Kaihong
    Xu, Hang
    [J]. CONTROL THEORY AND TECHNOLOGY, 2024, 22 (01) : 14 - 24
  • [4] Asynchronous Zeroth-Order Distributed Optimization with Residual Feedback
    Shen, Yi
    Zhang, Yan
    Nivison, Scott
    Bell, Zachary I.
    Zavlanos, Michael M.
    [J]. 2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 3349 - 3354
  • [5] Zeroth-Order Method for Distributed Optimization With Approximate Projections
    Yuan, Deming
    Ho, Daniel W. C.
    Xu, Shengyuan
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2016, 27 (02) : 284 - 294
  • [6] Zeroth-order algorithms for stochastic distributed nonconvex optimization
    Yi, Xinlei
    Zhang, Shengjun
    Yang, Tao
    Johansson, Karl H.
    [J]. AUTOMATICA, 2022, 142
  • [7] Zeroth-order Gradient Tracking for Distributed Constrained Optimization
    Cheng, Songsong
    Yu, Xin
    Fan, Yuan
    Xiao, Gaoxi
    [J]. IFAC PAPERSONLINE, 2023, 56 (02) : 5197 - 5202
  • [8] Communication-Efficient Zeroth-Order Distributed Online Optimization: Algorithm, Theory, and Applications
    Kaya, Ege C.
    Sahin, Mehmet Berk
    Hashemi, Abolfazl
    [J]. IEEE ACCESS, 2023, 11 : 61173 - 61191
  • [9] A zeroth-order algorithm for distributed optimization with stochastic stripe observations
    Wang, Yinghui
    Zeng, Xianlin
    Zhao, Wenxiao
    Hong, Yiguang
    [J]. SCIENCE CHINA INFORMATION SCIENCES, 2023, 66