Distributed Stochastic Proximal Algorithm With Random Reshuffling for Nonsmooth Finite-Sum Optimization

Cited by: 1
Authors
Jiang, Xia [1 ,2 ]
Zeng, Xianlin [1 ,2 ]
Sun, Jian [1 ,2 ]
Chen, Jie [3 ,4 ,5 ]
Xie, Lihua [6 ]
Affiliations
[1] Beijing Inst Technol, Sch Automat, Key Lab Intelligent Control & Decis Complex Syst, Beijing, Peoples R China
[2] Beijing Inst Technol, Chongqing Innovat Ctr, Chongqing 401120, Peoples R China
[3] Tongji Univ, Sch Elect & Informat Engn, Shanghai 200082, Peoples R China
[4] Beijing Inst Technol, Beijing Adv Innovat Ctr Intelligent Robots & Syst, Beijing 100081, Peoples R China
[5] Beijing Inst Technol, Key Lab Biomimet Robots & Syst, Minist Educ, Beijing 100081, Peoples R China
[6] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Distributed optimization; proximal operator; random reshuffling (RR); stochastic algorithm; time-varying graphs; GRADIENT ALGORITHMS; SUBGRADIENT METHODS;
DOI
10.1109/TNNLS.2022.3201711
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Nonsmooth finite-sum minimization is a fundamental problem in machine learning. This article develops a distributed stochastic proximal-gradient algorithm with random reshuffling to solve finite-sum minimization over time-varying multiagent networks. The objective function is a sum of differentiable convex functions and a nonsmooth regularization term. Each agent in the network updates its local variables through local information exchange and cooperates with the other agents to seek an optimal solution. We prove that the local estimates generated by the proposed algorithm achieve consensus and are attracted to a neighborhood of the optimal solution at an O(1/T + 1/√T) convergence rate, where T is the total number of iterations. Finally, comparative simulations are provided to verify the convergence performance of the proposed algorithm.
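The abstract describes a per-agent update pattern: a consensus step over the network, stochastic gradient steps taken in a reshuffled order over the agent's local samples, and a proximal step for the shared nonsmooth regularizer. The sketch below illustrates one such epoch; it is not the authors' reference implementation, and it assumes (none of this is stated in the record) least-squares local losses, an l1 regularizer handled by soft-thresholding, and a fixed doubly stochastic mixing matrix W standing in for the time-varying graph weights.

import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def rr_epoch(x, A, b, W, step, lam):
    """One random-reshuffling epoch of a distributed proximal-gradient sketch.
    x : (n_agents, d) local estimates
    A : list of (m_i, d) local data matrices, b : list of (m_i,) local targets
    W : (n_agents, n_agents) doubly stochastic mixing weights
    step : step size, lam : l1 regularization weight
    """
    n_agents, _ = x.shape
    x_new = np.empty_like(x)
    for i in range(n_agents):
        # Consensus step: mix neighbors' estimates with the agent's weights.
        z = W[i] @ x
        # Random reshuffling: visit the agent's samples in a fresh random order.
        for k in np.random.permutation(A[i].shape[0]):
            grad = (A[i][k] @ z - b[i][k]) * A[i][k]  # gradient of one sample's squared loss
            z = z - step * grad
        # Proximal step for the nonsmooth regularizer.
        x_new[i] = soft_threshold(z, step * lam)
    return x_new

# Illustrative usage on a 4-agent ring with synthetic data.
rng = np.random.default_rng(0)
n_agents, d, m = 4, 5, 10
A = [rng.standard_normal((m, d)) for _ in range(n_agents)]
b = [rng.standard_normal(m) for _ in range(n_agents)]
W = 0.5 * np.eye(n_agents) + 0.25 * (np.roll(np.eye(n_agents), 1, axis=0)
                                      + np.roll(np.eye(n_agents), -1, axis=0))
x = np.zeros((n_agents, d))
for epoch in range(50):
    x = rr_epoch(x, A, b, W, step=0.01, lam=0.1)
print(np.round(x.mean(axis=0), 3))

In the paper's setting the mixing weights would vary with the time-varying graph and the step size would typically diminish over epochs; the fixed choices above are only to keep the sketch short.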
Pages: 4082-4096
Number of pages: 15