A One-Sample Decentralized Proximal Algorithm for Non-Convex Stochastic Composite Optimization

Cited by: 0
Authors
Xiao, Tesi [1 ]
Chen, Xuxing [2 ]
Balasubramanian, Krishnakumar [1 ]
Ghadimi, Saeed [3 ]
Affiliations
[1] Univ Calif, Dept Stat, Davis, CA 95616 USA
[2] Univ Calif, Dept Math, Davis, CA USA
[3] Univ Waterloo, Dept Management Sci, Waterloo, ON, Canada
Source
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
DISTRIBUTED OPTIMIZATION; CONVERGENCE; CONSENSUS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
We focus on decentralized stochastic non-convex optimization, where n agents collaborate to optimize a composite objective function that is the sum of a smooth term and a non-smooth convex term. To solve this problem, we propose two single-time-scale algorithms: Prox-DASA and Prox-DASA-GT. These algorithms find ε-stationary points in O(n^{-1}ε^{-2}) iterations using constant (i.e., O(1)) batch sizes. Unlike prior work, our algorithms achieve comparable complexity without requiring large batch sizes, more complex per-iteration operations (such as double loops), or stronger assumptions. Our theoretical findings are supported by extensive numerical experiments, which demonstrate the superiority of our algorithms over previous approaches. Our code is available at https://github.com/xuxingc/ProxDASA.
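The abstract does not reproduce the Prox-DASA update itself, so the sketch below is only a generic illustration of the problem class it targets: n agents minimizing a smooth stochastic term plus a non-smooth convex term (here ℓ1) over a network, via plain decentralized stochastic proximal gradient with a doubly stochastic mixing matrix. All names, the toy quadratic objectives, and the parameter values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def decentralized_prox_sgd(grads, W, d, lam=0.1, eta=0.05, T=200, seed=0):
    """Generic decentralized stochastic proximal-gradient loop (illustrative).

    grads : list of per-agent stochastic gradient oracles g_i(x, rng)
    W     : doubly stochastic mixing matrix (n x n) of the network
    Each iteration: average with neighbors (consensus/mixing step),
    take a stochastic gradient step on the smooth part, then apply
    the prox of the non-smooth l1 term locally at every agent.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    X = np.zeros((n, d))            # one iterate (row) per agent
    for _ in range(T):
        X = W @ X                   # consensus (mixing) step
        G = np.stack([g(X[i], rng) for i, g in enumerate(grads)])
        X = soft_threshold(X - eta * G, eta * lam)  # local prox step
    return X.mean(axis=0)           # average iterate across agents

# Toy example: agent i holds a noisy quadratic f_i(x) = 0.5 ||x - b_i||^2,
# so the network minimizes 0.5 ||x - mean(b)||^2 + lam ||x||_1 (up to a constant).
n, d = 4, 5
b = np.random.default_rng(1).normal(size=(n, d))
grads = [lambda x, rng, bi=bi: (x - bi) + 0.01 * rng.normal(size=d) for bi in b]
W = np.full((n, n), 1.0 / n)        # fully connected, uniform mixing
x_hat = decentralized_prox_sgd(grads, W, d)
```

For this toy problem the minimizer is the soft-thresholded mean of the b_i, so the averaged iterate should land near `soft_threshold(b.mean(axis=0), lam)`; the single-loop, O(1)-batch structure mirrors the setting the abstract describes, though Prox-DASA additionally uses averaged gradient estimators (and Prox-DASA-GT gradient tracking) to get its stated rate.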
Pages: 2324-2334 (11 pages)
Related Papers (50 total)
  • [1] Stochastic proximal quasi-Newton methods for non-convex composite optimization
    Wang, Xiaoyu
    Wang, Xiao
    Yuan, Ya-xiang
    OPTIMIZATION METHODS & SOFTWARE, 2019, 34 (05): 922-948
  • [2] Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
    Fort, Gersende
    Moulines, Eric
    STATISTICS AND COMPUTING, 2023, 33 (03)
  • [4] A PROJECTION-FREE DECENTRALIZED ALGORITHM FOR NON-CONVEX OPTIMIZATION
    Wai, Hoi-To
    Scaglione, Anna
    Lafond, Jean
    Moulines, Eric
    2016 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2016: 475-479
  • [5] Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization
    Dresdner, Gideon
    Vladarean, Maria-Luiza
    Raetsch, Gunnar
    Locatello, Francesco
    Cevher, Volkan
    Yurtsever, Alp
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [6] An Improved Convergence Analysis for Decentralized Online Stochastic Non-Convex Optimization
    Xin, Ran
    Khan, Usman A.
    Kar, Soummya
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69: 1842-1858
  • [7] Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization
    Yuan, Kun
    Huang, Xinmeng
    Chen, Yiming
    Zhang, Xiaohan
    Zhang, Yingya
    Pan, Pan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [8] A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization
    Xin, Ran
    Khan, Usman A.
    Kar, Soummya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [9] Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization
    Metel, Michael R.
    Takeda, Akiko
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22