A Variance Reducing Stochastic Proximal Method with Acceleration Techniques

Cited: 0
Authors
Lei, Jialin [1 ]
Zhang, Ying [1 ]
Zhang, Zhao [1 ]
Affiliations
[1] Zhejiang Normal University, School of Mathematical Sciences, Jinhua 321004, People's Republic of China
Source
TSINGHUA SCIENCE AND TECHNOLOGY, 2023, Vol. 28, No. 6
Keywords
composite optimization; Variance Reduction (VR); fast Douglas-Rachford (DR) splitting; proximal operator
DOI
10.26599/TST.2022.9010051
CLC Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
We consider a fundamental problem in machine learning, structural risk minimization, which can be represented as the average of a large number of smooth component functions plus a simple, convex (but possibly non-smooth) function. In this paper, we propose a novel proximal variance-reducing stochastic method that builds on Point-SAGA. Our method performs two proximal operator evaluations per iteration by incorporating fast Douglas-Rachford splitting, and it follows the FISTA scheme in its choice of momentum factors. We show that the objective function values converge at the rate O(1/k) when each loss function is convex and smooth, and we prove that the method achieves a linear convergence rate when the loss functions are strongly convex and smooth. Experiments demonstrate the effectiveness of the proposed algorithm, which exhibits good acceleration especially when the loss functions are ill-conditioned.
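
For concreteness, the objective described in the abstract is the standard composite form below; the symbols F, f_i, g, and gamma are our notational choices and do not appear in this record:

\min_{x \in \mathbb{R}^d} F(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x) + g(x),

where each f_i is smooth and convex, and g is simple and convex but possibly non-smooth. The proximal operator named in the keywords is, for a function h and step size \gamma > 0,

\operatorname{prox}_{\gamma h}(z) = \operatorname*{arg\,min}_{x} \left\{ h(x) + \frac{1}{2\gamma} \lVert x - z \rVert^2 \right\}.

The "two proximal operator calculations" of the abstract plausibly correspond to applying this operator once to a sampled loss f_j and once to g within the Douglas-Rachford splitting pattern; the record itself does not spell this out.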
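
The record does not include the update rule itself, so the following Python sketch only illustrates how the named ingredients might fit together: a Point-SAGA-style gradient table, two closed-form proximal evaluations arranged in a Douglas-Rachford-flavored step, and FISTA-style momentum factors. Everything here (the ridge-regression losses, the function names, the exact sequencing of the steps) is our assumption for illustration, not the paper's algorithm.

import numpy as np

# Illustrative sketch only: Point-SAGA-style gradient table + a DR-flavored
# double prox + FISTA-style momentum, applied to ridge regression. The names
# and the sequencing are assumptions, not the algorithm from the paper.

def prox_quad_loss(z, a, b, gamma):
    # Closed-form prox of a single squared loss f(x) = 0.5 * (a @ x - b)**2:
    # argmin_x 0.5 * (a @ x - b)**2 + ||x - z||**2 / (2 * gamma)
    return z - (gamma * (a @ z - b) / (1.0 + gamma * (a @ a))) * a

def prox_l2(z, lam, gamma):
    # Closed-form prox of the regularizer g(x) = (lam / 2) * ||x||**2
    return z / (1.0 + gamma * lam)

def point_saga_momentum(A, b, lam, gamma, iters, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x, x_prev = np.zeros(d), np.zeros(d)
    grads = np.zeros((n, d))       # table of stored component gradients
    g_bar = grads.mean(axis=0)     # running average of the table
    t = 1.0                        # FISTA momentum sequence t_k
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)    # momentum extrapolation
        j = rng.integers(n)
        z = y + gamma * (grads[j] - g_bar)             # SAGA-style correction
        x_half = prox_quad_loss(z, A[j], b[j], gamma)  # first prox: sampled loss
        x_new = prox_l2(2.0 * x_half - z, lam, gamma)  # second prox: DR-style reflected point
        g_new = (z - x_half) / gamma                   # implicit gradient of f_j at x_half
        g_bar += (g_new - grads[j]) / n                # keep the average in sync
        grads[j] = g_new
        x_prev, x, t = x, x_new, t_next
    return x

A call such as point_saga_momentum(np.random.randn(200, 50), np.random.randn(200), lam=1e-2, gamma=0.5, iters=5000) runs the sketch on a synthetic ridge-regression instance; the momentum extrapolation on y is what distinguishes this sketch from plain Point-SAGA.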
Pages: 999-1008
Page count: 10
Related Papers
50 records in total
  • [1] Stochastic Proximal Gradient Descent with Acceleration Techniques
    Nitanda, Atsushi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [2] Variance Reduction Techniques for Stochastic Proximal Point Algorithms
    Traore, Cheik
    Apidopoulos, Vassilis
    Salzo, Saverio
    Villa, Silvia
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2024, 203 (02) : 1910 - 1939
  • [3] Proximal Stochastic Gradient Method with Progressive Variance Reduction
    Xiao, Lin
    Zhang, Tong
    SIAM JOURNAL ON OPTIMIZATION, 2014, 24 (04) : 2057 - 2075
  • [4] Adaptive Proximal Average Based Variance Reducing Stochastic Methods for Optimization with Composite Regularization
    Liu, Jingchang
    Xu, Linli
    Guo, Junliang
    Sheng, Xin
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 1552 - 1559
  • [5] An accelerated variance reducing stochastic method with Douglas-Rachford splitting
    Liu, Jingchang
    Xu, Linli
    Shen, Shuheng
    Ling, Qing
    MACHINE LEARNING, 2019, 108 (05) : 859 - 878
  • [6] Momentum-Based Variance-Reduced Proximal Stochastic Gradient Method for Composite Nonconvex Stochastic Optimization
    Xu, Yangyang
    Xu, Yibo
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2023, 196 (01) : 266 - 297
  • [7] Asynchronous Stochastic Proximal Optimization Algorithms with Variance Reduction
    Meng, Qi
    Chen, Wei
    Yu, Jingcheng
    Wang, Taifeng
    Ma, Zhi-Ming
    Liu, Tie-Yan
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2329 - 2335
  • [8] Accelerated proximal stochastic variance reduction for DC optimization
    He, Lulu
    Ye, Jimin
    E, Jianwei
    NEURAL COMPUTING AND APPLICATIONS, 2021, 33 : 13163 - 13181