Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization

Cited: 0
Authors: Yu, Yue [1]; Huang, Longbo [1]
Affiliation: [1] Tsinghua Univ, Inst Interdisciplinary Informat Sci, Beijing, Peoples R China
Funding: National Natural Science Foundation of China
DOI: none
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We consider the stochastic composition optimization problem proposed in [Wang et al., 2016a], which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm for this problem, named com-SVR-ADMM, and show that it converges linearly for strongly convex and Lipschitz smooth objectives. When the objective is convex and Lipschitz smooth, com-SVR-ADMM achieves a convergence rate of O(log S / S), improving upon the O(S^(-4/9)) rate in [Wang et al., 2016b]. Moreover, com-SVR-ADMM attains a rate of O(1/√S) when the objective is convex but not Lipschitz smooth. We also conduct experiments showing that it outperforms existing algorithms.
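The key ingredient behind these rates is variance reduction applied to the nested gradient ∂g(x)^T ∇f(g(x)) of a composition objective f(g(x)), where the inner map g is itself an expectation. As a hedged illustration only (this is not the paper's com-SVR-ADMM, which additionally wraps such estimates inside ADMM updates for linearly constrained problems; all data and names below are hypothetical), a minimal NumPy sketch of an SVRG-style variance-reduced composition gradient on a toy linear-inner, quadratic-outer problem might look like:

```python
import numpy as np

# Toy composition problem (illustrative, not from the paper): minimize f(g(x)) with
#   inner map  g(x) = (1/n) * sum_i (A_i x + b_i)   (an expectation over samples),
#   outer loss f(y) = 0.5 * ||y||^2.
rng = np.random.default_rng(0)
n, d, m = 50, 5, 4
A0 = rng.normal(size=(m, d))                 # shared component keeps the mean map well-conditioned
A = A0 + 0.1 * rng.normal(size=(n, m, d))    # per-sample inner maps A_i
b = rng.normal(size=(n, m))

def g_i(i, x):                               # one inner sample g_i(x)
    return A[i] @ x + b[i]

def full_g(x):                               # exact inner expectation (evaluated at snapshots)
    return np.mean(A, axis=0) @ x + np.mean(b, axis=0)

grad_f = lambda y: y                         # gradient of f(y) = 0.5 * ||y||^2

x = np.ones(d)
eta = 0.02
for epoch in range(30):
    x_snap = x.copy()
    g_snap = full_g(x_snap)                  # full inner pass at the snapshot
    J_snap = np.mean(A, axis=0)              # full inner Jacobian at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # SVRG-style variance-reduced estimate of the inner value g(x);
        # the inner Jacobian is constant here, so J_snap is already exact.
        g_est = g_i(i, x) - g_i(i, x_snap) + g_snap
        x -= eta * J_snap.T @ grad_f(g_est)  # chain-rule stochastic gradient step
```

The estimator g_i(x) − g_i(x_snap) + g_snap is unbiased for g(x), and its variance shrinks as x approaches the snapshot; control of this inner-estimation variance is what enables the faster rates the abstract reports.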
Pages: 3364-3370 (7 pages)