Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions

Cited: 0
Authors
Yan, Xiaoxi [1 ]
Li, Cheng [1 ]
Lu, Kaihong [2 ]
Xu, Hang [2 ]
Affiliations
[1] Jiangsu Univ, Sch Elect & Informat Engn, Zhenjiang 212013, Jiangsu, Peoples R China
[2] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Shandong, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Multi-agent system; Online distributed optimization; Pseudoconvex optimization; Random gradient-free method; PSEUDOMONOTONE VARIATIONAL-INEQUALITIES; MIXED EQUILIBRIUM PROBLEMS; CONVEX-OPTIMIZATION; MULTIAGENT OPTIMIZATION; ALGORITHMS;
DOI
10.1007/s11768-023-00181-8
CLC Number
TP [automation technology, computer technology];
Discipline Code
0812 ;
Abstract
This paper focuses on the online distributed optimization problem based on multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to the information about the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent only uses estimates of gradients instead of the real gradient information to make decisions. Dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows no faster than a certain rate, then the expectation of the dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
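The abstract's core idea, replacing true gradients with function-value-only estimates inside a consensus-plus-projection loop, can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: the quadratic tracking costs (strongly convex, hence strongly pseudoconvex), the mixing matrix `W`, the box constraint, and the step-size schedule are all assumptions chosen for a minimal runnable demo.

```python
import numpy as np

def multipoint_grad_estimate(f, x, delta=1e-3, num_dirs=5, rng=None):
    """Multi-point random gradient-free estimator: average of two-point
    finite differences (d/delta) * (f(x + delta*u) - f(x)) * u over random
    unit directions u. Uses only function evaluations, never a true gradient."""
    rng = np.random.default_rng(rng) if not isinstance(rng, np.random.Generator) else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (d / delta) * (f(x + delta * u) - f(x)) * u
    return g / num_dirs

# 3 agents on a fixed, doubly stochastic mixing matrix (illustrative;
# the paper allows time-varying digraphs).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
# Per-agent offsets summing to zero, so the minimizer of the total
# cost at time t is the drifting point base(t).
offsets = np.array([[0.30, 0.00], [-0.15, 0.26], [-0.15, -0.26]])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(3, 2))  # agents' states in R^2
for t in range(1, 201):
    base = np.array([np.sin(0.01 * t), np.cos(0.01 * t)])  # moving minimizer
    X = W @ X                      # consensus: mix with neighbors' states
    step = 1.0 / np.sqrt(t)        # decaying step size
    for i in range(3):
        # Cost is revealed only after the decision; here we simply query it
        # at perturbed points to build the gradient estimate.
        f = lambda z, c=base + offsets[i]: np.sum((z - c) ** 2)
        g = multipoint_grad_estimate(f, X[i], rng=rng)
        X[i] = np.clip(X[i] - step * g, -2.0, 2.0)  # projection onto the box
```

Because the minimizer drifts slowly (cumulative deviation grows slowly), the agents track `base(t)` to within a small steady-state lag, mirroring the sublinear expected dynamic regret claimed in the abstract.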
Pages: 14-24
Page count: 11
Related papers
50 records in total
  • [21] Exact Convergence of Gradient-Free Distributed Optimization Method in a Multi-Agent System
    Pang, Yipeng
    Hu, Guoqiang
    2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2018: 5728-5733
  • [22] Distributed quantized random gradient-free algorithm with event triggered communication
    Xie Y.-B.
    Gao W.-H.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2021, 38 (08): 1175-1187
  • [23] Distributed gradient-free and projection-free algorithm for stochastic constrained optimization
    Hou J.
    Zeng X.
    Chen C.
    Autonomous Intelligent Systems, 2024, 4 (01)
  • [24] Gradient-Free Method for Heavily Constrained Nonconvex Optimization
    Shi, Wanli
    Gao, Hongchang
    Gu, Bin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [25] Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization
    Yu, Zhan
    Ho, Daniel W. C.
    Yuan, Deming
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2022, 67 (02): 957-964
  • [26] Multiobjective Optimization for Turbofan Engine Using Gradient-Free Method
    Chen, Ran
    Li, Yuzhe
    Sun, Xi-Ming
    Chai, Tianyou
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2024, 54 (07): 4345-4357
  • [27] Gradient-free method for distributed multi-agent optimization via push-sum algorithms
    Yuan, Deming
    Xu, Shengyuan
    Lu, Junwei
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2015, 25 (10): 1569-1580
  • [28] Distributed Quantized Gradient-Free Algorithm for Multi-Agent Convex Optimization
    Ding, Jingjing
    Yuan, Deming
    Jiang, Guoping
    Zhou, Yingjiang
    2017 29TH CHINESE CONTROL AND DECISION CONFERENCE (CCDC), 2017: 6431-6435
  • [29] Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm
    Akhavan, Arya
    Chzhen, Evgenii
    Pontil, Massimiliano
    Tsybakov, Alexandre B.
    arXiv, 2023
  • [30] Asynchronous Gossip-Based Gradient-Free Method for Multiagent Optimization
    Yuan, Deming
    ABSTRACT AND APPLIED ANALYSIS, 2014