Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions

Cited by: 0
Authors
Yan, Xiaoxi [1 ]
Li, Cheng [1 ]
Lu, Kaihong [2 ]
Xu, Hang [2 ]
Affiliations
[1] Jiangsu Univ, Sch Elect & Informat Engn, Zhenjiang 212013, Jiangsu, Peoples R China
[2] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Shandong, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Multi-agent system; Online distributed optimization; Pseudoconvex optimization; Random gradient-free method; PSEUDOMONOTONE VARIATIONAL-INEQUALITIES; MIXED EQUILIBRIUM PROBLEMS; CONVEX-OPTIMIZATION; MULTIAGENT OPTIMIZATION; ALGORITHMS;
DOI
10.1007/s11768-023-00181-8
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline classification code
0812;
Abstract
This paper focuses on the online distributed optimization problem over multi-agent systems. In this problem, each agent can access only its own cost function and a convex constraint set, and can exchange local state information only with its current neighbors through a time-varying digraph. In addition, agents do not have access to the current cost functions until after their decisions are made. Unlike most existing work on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and true gradients of the cost functions are unavailable. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that, under the proposed algorithm, each agent uses only gradient estimates rather than true gradient information to make decisions. Dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expected dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
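The abstract's key device is a random gradient-free (zeroth-order) estimator: each agent probes its cost along random directions and averages finite differences instead of evaluating a true gradient. The sketch below illustrates the generic multi-point idea with function evaluations only; the specific estimator, step sizes, and smoothing parameter in the paper may differ, and the function names here are illustrative, not the authors' notation.

```python
import numpy as np

def random_gradient_estimate(f, x, delta=1e-4, num_points=10, rng=None):
    """Multi-point random gradient-free estimator (illustrative sketch).

    Averages `num_points` two-point finite differences of f along random
    unit directions. For smooth f, the expectation of this estimator
    approaches the true gradient as delta -> 0; only evaluations of f
    are used, never its gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_points):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # random direction on the unit sphere
        g += (f(x + delta * u) - f(x)) / delta * u
    # Factor d corrects for sampling directions uniformly on the sphere.
    return d * g / num_points

# Usage: estimate the gradient of f(x) = ||x||^2 at x = (1, 2);
# the true gradient there is (2, 4).
f = lambda x: float(x @ x)
x = np.array([1.0, 2.0])
g_hat = random_gradient_estimate(f, x, num_points=2000,
                                 rng=np.random.default_rng(0))
```

In the online distributed setting, each agent would plug such an estimate into its local update in place of the true gradient, which is why the regret bounds hold in expectation.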
Pages: 14-24
Number of pages: 11