Numerical methods for distributed stochastic compositional optimization problems with aggregative structure

Citations: 0
Authors
Zhao, Shengchao [1 ]
Liu, Yongchao [2 ]
Affiliations
[1] China Univ Min & Technol, Sch Math, Xuzhou, Peoples R China
[2] Dalian Univ Technol, Sch Math Sci, Dalian, Peoples R China
Keywords
Distributed stochastic compositional optimization; aggregative structure; hybrid variance reduction technique; dynamic consensus mechanism; communication compression; GRADIENT DESCENT; ALGORITHMS;
DOI
10.1080/10556788.2024.2381214
CLC classification number
TP31 [Computer Software];
Discipline classification codes
081202 ; 0835 ;
Abstract
The paper studies distributed stochastic compositional optimization problems over networks, where the inner-level function shared by all agents is the sum of each agent's private expectation function. Exploiting this aggregative structure of the inner-level function, we employ the hybrid variance reduction technique to estimate each agent's private expectation function, and apply a dynamic consensus mechanism to track the inner-level function. Combining these with the standard distributed stochastic gradient descent method, we propose a distributed aggregative stochastic compositional gradient descent method. When the objective function is smooth, the proposed method achieves the convergence rate $\mathcal{O}(K^{-1/2})$. We further combine the proposed method with communication compression and propose a communication-compressed variant, which maintains the same convergence rate $\mathcal{O}(K^{-1/2})$. Simulated experiments on decentralized reinforcement learning verify the effectiveness of the proposed methods.
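The combination described in the abstract can be illustrated with a deliberately simplified sketch: a deterministic toy problem in which four agents on a ring network minimize $f\big(\sum_i g_i(x)\big)$ with private inner functions $g_i(x) = a_i x$ and outer function $f(v) = \tfrac{1}{2}v^2$, using gossip-style gradient steps plus dynamic-consensus trackers for the aggregate inner value and inner gradient. All names and constants below are illustrative assumptions; there is no stochastic sampling (hence no variance reduction) and no compression, so this is not the authors' algorithm, only a minimal sketch of the aggregative-tracking idea.

```python
import numpy as np

n = 4
a = np.array([1.0, 2.0, 0.5, 1.5])   # slopes of the private inner functions g_i(x) = a_i * x

# Doubly stochastic mixing matrix for a 4-agent ring network
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = np.ones(n)        # each agent's local copy of the decision variable
g = a * x             # local inner-function values g_i(x_i)
dg = a.copy()         # local inner-function gradients (constant here)
y = g.copy()          # dynamic-consensus trackers of (1/n) * sum_j g_j
u = dg.copy()         # dynamic-consensus trackers of (1/n) * sum_j g_j'

alpha = 0.01
for k in range(500):
    # Outer function f(v) = 0.5 v^2, so f'(v) = v; the chain rule gives
    # grad F(x) = f'(sum_j g_j(x)) * sum_j g_j'(x), estimated from the trackers.
    grad = (n * y) * (n * u)
    x_new = W @ x - alpha * grad      # mix with neighbors, then descend
    g_new = a * x_new
    y = W @ y + (g_new - g)           # dynamic consensus on inner values
    u = W @ u + (a - dg)              # gradients are constant, so the correction is zero
    g, x = g_new, x_new

print(np.round(x, 4))   # all agents end up close to the minimizer x* = 0
```

The tracker update preserves the sum invariant $\mathbf{1}^\top y_k = \mathbf{1}^\top g_k$, so each $n\,y_i$ is an agent-local estimate of the aggregate $\sum_j g_j(x)$ that no single agent could evaluate on its own; this is the role the dynamic consensus mechanism plays in the aggregative setting.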
Pages: 32
Related Papers
50 items in total
  • [31] Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization
    Chen, Tianyi
    Sun, Yuejiao
    Yin, Wotao
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 4937 - 4948
  • [32] Decomposition of Convex High Dimensional Aggregative Stochastic Control Problems
    Adrien Seguret
    Clemence Alasseur
    J. Frédéric Bonnans
    Antonio De Paola
    Nadia Oudjane
    Vincenzo Trovato
    [J]. Applied Mathematics & Optimization, 2023, 88
  • [34] Numerical Methods for Stochastic Singular Control Problems
    Kushner, H. J.
    Martins, L. F.
    [J]. SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 1991, 29 (06) : 1443 - 1475
  • [35] Study of convergence rates of numerical methods for stochastic control problems
    Song, Q. S.
    Yin, G.
    [J]. PROCEEDINGS OF THE 46TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-14, 2007, : 1162 - 1167
  • [36] Numerical methods for stochastic systems preserving symplectic structure
    Milstein, GN
    Repin, YM
    Tretyakov, MV
    [J]. SIAM JOURNAL ON NUMERICAL ANALYSIS, 2002, 40 (04) : 1583 - 1604
  • [37] Distributed mirror descent method with operator extrapolation for stochastic aggregative games
    Wang, Tongyu
    Yi, Peng
    Chen, Jie
    [J]. AUTOMATICA, 2024, 159
  • [38] Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization
    Nedic, Angelia
    [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2020, 37 (03) : 92 - 101
  • [39] Distributed No-Regret Learning for Stochastic Aggregative Games over Networks
    Lei, Jinlong
    Yi, Peng
    Li, Li
    [J]. 2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 7512 - 7519
  • [40] Performance of Classic Numerical Methods in Unrestricted Optimization Problems
    Eisermann, Jonatan Ismael
    Brito, Maritza Camilli Almeida
    [J]. ABAKOS, 2021, 9 (02): : 25 - 47