Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

Cited by: 560
Authors
Ram, S. Sundhar [2 ]
Nedic, A. [1 ]
Veeravalli, V. V. [2 ]
Affiliations
[1] Univ Illinois, Ind & Enterprise Syst Engn Dept, Urbana, IL 61801 USA
[2] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
Keywords
Distributed algorithm; Convex optimization; Subgradient methods; Stochastic approximation; Consensus; Convergence
DOI
10.1007/s10957-010-9737-7
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Discipline Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
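For intuition, here is a minimal NumPy sketch of the iteration the abstract describes: each agent mixes its neighbors' iterates through a consensus weight matrix, takes a step along a subgradient of its own objective corrupted by stochastic error, and projects back onto the common constraint set. All names below (step, W, subgrads, project, alpha, noise_std) and the Gaussian error model are illustrative assumptions, not notation from the paper.

    import numpy as np

    def step(X, W, subgrads, project, alpha, noise_std=0.0, rng=None):
        # One synchronous round of a distributed projected subgradient
        # iteration. X is (n_agents, dim) with one iterate per row; W is a
        # row-stochastic consensus weight matrix; subgrads[i](x) returns a
        # subgradient of agent i's objective at x; project maps a point onto
        # the common constraint set; alpha is the stepsize; noise_std models
        # the stochastic subgradient error (an assumption for this sketch).
        rng = rng or np.random.default_rng()
        n, d = X.shape
        V = W @ X  # each agent forms a weighted average of received iterates
        X_next = np.empty_like(X)
        for i in range(n):
            g = subgrads[i](V[i])                        # local subgradient
            e = noise_std * rng.standard_normal(d)       # stochastic error
            X_next[i] = project(V[i] - alpha * (g + e))  # noisy step, then projection
        return X_next

    # Example: three agents minimizing sum_i ||x - a_i||_1 over the unit ball,
    # with equal weights (complete graph) and a diminishing stepsize.
    anchors = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
    subgrads = [lambda x, a=a: np.sign(x - a) for a in anchors]
    project = lambda x: x / max(1.0, np.linalg.norm(x))
    W = np.full((3, 3), 1.0 / 3.0)
    X = np.zeros((3, 2))
    for k in range(2000):
        X = step(X, W, subgrads, project, alpha=1.0 / (k + 1), noise_std=0.1)

With the diminishing stepsize alpha_k = 1/(k+1) and bounded errors, the rows of X should cluster near a common point of the ball, which is the qualitative behavior the abstract's consensus and convergence results formalize.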
Pages: 516-545
Number of pages: 30
Related papers (50 in total)
  • [31] The efficiency of subgradient projection methods for convex optimization, part II: Implementations and extensions
    Kiwiel, K. C.
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 1996, 34 (02): 677-697
  • [32] Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
    Yuan, Deming
    Xu, Shengyuan
    Zhang, Baoyong
    Rong, Lina
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2013, 23 (16): 1846-1868
  • [33] Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
    Wang, Yinghui
    Zhao, Wenxiao
    Hong, Yiguang
    Zamani, Mohsen
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2019, 57 (04): 2821-2842
  • [34] A Critical Evaluation of Stochastic Algorithms for Convex Optimization
    Wiesler, Simon
    Richard, Alexander
    Schlueter, Ralf
    Ney, Hermann
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013: 6955-6959
  • [35] Stochastic block projection algorithms with extrapolation for convex feasibility problems
    Necoara, I.
    OPTIMIZATION METHODS & SOFTWARE, 2022, 37 (05): 1845-1875
  • [36] A Stochastic Newton Algorithm for Distributed Convex Optimization
    Bullins, Brian
    Patel, Kumar Kshitij
    Shamir, Ohad
    Srebro, Nathan
    Woodworth, Blake
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [37] Fully distributed algorithms for convex optimization problems
    Mosk-Aoyama, Damon
    Roughgarden, Tim
    Shah, Devavrat
    DISTRIBUTED COMPUTING, PROCEEDINGS, 2007, 4731: 492+
  • [38] Fully Distributed Algorithms for Convex Optimization Problems
    Mosk-Aoyama, Damon
    Roughgarden, Tim
    Shah, Devavrat
    SIAM JOURNAL ON OPTIMIZATION, 2010, 20 (06): 3260-3279
  • [39] Optimal subgradient algorithms for large-scale convex optimization in simple domains
    Ahookhosh, Masoud
    Neumaier, Arnold
    NUMERICAL ALGORITHMS, 2017, 76 (04): 1071-1097