Stochastic mirror descent method for distributed multi-agent optimization

Cited: 19
Authors
Li, Jueyou [1 ,2 ]
Li, Guoquan [1 ]
Wu, Zhiyou [1 ]
Wu, Changzhi [3 ]
Affiliations
[1] Chongqing Normal Univ, Sch Math Sci, Chongqing 400047, Peoples R China
[2] Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW 2006, Australia
[3] Curtin Univ, Sch Built Environm, Bentley, WA 6102, Australia
Keywords
Distributed algorithm; Multi-agent network; Mirror descent; Stochastic approximation; Convex optimization; GRADIENT-FREE METHOD; CONVEX-OPTIMIZATION; SUBGRADIENT METHODS; ALGORITHMS; CONSENSUS; NETWORKS;
DOI
10.1007/s11590-016-1071-z
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research]
Discipline Codes
070105; 12; 1201; 1202; 120202
Abstract
This paper considers a distributed optimization problem over a time-varying multi-agent network, in which each agent has access only to its own convex objective function and the agents cooperatively minimize the sum of these functions over the network. Based on the mirror descent method, we develop a distributed algorithm that uses subgradient information corrupted by stochastic errors. We first analyze the effect of the stochastic errors on the convergence of the algorithm and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that, when the subgradient evaluations are subject to stochastic errors, the algorithm asymptotically converges to within an error level of the optimal value of the problem. The proposed algorithm can be viewed as a generalization of distributed subgradient projection methods, since it employs a general Bregman divergence in place of the Euclidean squared distance. Finally, simulation results on a regularized hinge regression problem are presented to illustrate the effectiveness of the algorithm.
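As a concrete illustration of the scheme the abstract describes, below is a minimal Python sketch of distributed stochastic mirror descent: each agent averages its neighbours' iterates through a doubly stochastic mixing matrix and then takes a mirror step with a noisy subgradient. The fixed ring network, the entropic mirror map on the probability simplex, the unregularized hinge losses, and all names (W, noisy_subgradient, step, noise_std) are illustrative assumptions, not the paper's exact construction; the paper allows time-varying networks and general Bregman divergences.

import numpy as np

rng = np.random.default_rng(0)

n_agents, dim, n_iters = 4, 5, 200
step = 0.1        # constant step size (the analysis typically uses a diminishing one)
noise_std = 0.05  # standard deviation of the stochastic error in each subgradient

# Local data: agent i holds one labelled sample (A[i], b[i]) and the hinge
# loss f_i(x) = max(0, 1 - b[i] * A[i] @ x); the network minimizes sum_i f_i.
A = rng.normal(size=(n_agents, dim))
b = rng.choice([-1.0, 1.0], size=n_agents)

def noisy_subgradient(i, x):
    """Subgradient of agent i's hinge loss at x, corrupted by zero-mean noise."""
    g = -b[i] * A[i] if b[i] * (A[i] @ x) < 1.0 else np.zeros(dim)
    return g + noise_std * rng.normal(size=dim)

# Doubly stochastic mixing matrix for a fixed 4-agent ring (the paper allows
# a time-varying sequence of such matrices).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Every agent starts from the uniform point of the probability simplex.
X = np.full((n_agents, dim), 1.0 / dim)

for t in range(n_iters):
    Z = W @ X  # consensus step: weighted averaging with neighbours
    G = np.array([noisy_subgradient(i, Z[i]) for i in range(n_agents)])
    # Entropic mirror step: argmin_x <g, x> + KL(x, z)/step over the simplex
    # has the closed form x proportional to z * exp(-step * g).
    X = Z * np.exp(-step * G)
    X /= X.sum(axis=1, keepdims=True)

print("disagreement:", np.max(np.abs(X - X.mean(axis=0))))
print("objective   :", sum(max(0.0, 1.0 - b[i] * (A[i] @ X[i])) for i in range(n_agents)))

With the Euclidean squared distance as the Bregman term, the mirror step above reduces to an ordinary projected subgradient step, which is the sense in which the method generalizes distributed subgradient projection.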
Pages: 1179-1197
Page count: 19
Related Papers
50 records in total
  • [1] Stochastic mirror descent method for distributed multi-agent optimization
    Jueyou Li
    Guoquan Li
    Zhiyou Wu
    Changzhi Wu
    [J]. Optimization Letters, 2018, 12: 1179-1197
  • [2] Distributed mirror descent method for multi-agent optimization with delay
    Li, Jueyou
    Chen, Guo
    Dong, Zhaoyang
    Wu, Zhiyou
    [J]. Neurocomputing, 2016, 177: 643-650
  • [3] Multi-Agent Mirror Descent for Decentralized Stochastic Optimization
    Rabbat, Michael
    [C]. 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015: 517-520
  • [4] Stabilized distributed online mirror descent for multi-agent optimization
    Wu, Ping
    Huang, Heyan
    Lu, Haolin
    Liu, Zhengyang
    [J]. Knowledge-Based Systems, 2024, 304
  • [5] Quantizer-based distributed mirror descent for multi-agent convex optimization
    Xiong, Menghui
    Zhang, Baoyong
    Yuan, Deming
    [C]. Proceedings of the 33rd Chinese Control and Decision Conference (CCDC 2021), 2021: 3485-3490
  • [6] A Flexible Stochastic Multi-Agent ADMM Method for Large-Scale Distributed Optimization
    Wu, Lin
    Wang, Yongbin
    Shi, Tuo
    [J]. IEEE Access, 2022, 10: 19045-19059
  • [7] A Stochastic Mirror-Descent Algorithm for Solving AXB = C over a Multi-Agent System
    Wang, Yinghui
    Cheng, Songsong
    [J]. Kybernetika, 2021, 57(2): 256-271
  • [8] Distributed Heterogeneous Multi-Agent Optimization with Stochastic Sub-Gradient
    Hu, Haokun
    Mo, Lipo
    Cao, Xianbing
    [J]. Journal of Systems Science and Complexity, 2024, 37(4): 1470-1487
  • [9] Optimal distributed stochastic mirror descent for strongly convex optimization
    Yuan, Deming
    Hong, Yiguang
    Ho, Daniel W. C.
    Jiang, Guoping
    [J]. Automatica, 2018, 90: 196-203