On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization

Cited: 0
Authors
Andre Milzarek
Xiantao Xiao
Zaiwen Wen
Michael Ulbrich
Affiliations
[1] School of Data Science, The Chinese University of Hong Kong
[2] Shenzhen Research Institute of Big Data
[3] Shenzhen Institute of Artificial Intelligence and Robotics for Society
[4] School of Mathematical Sciences, Dalian University of Technology
[5] Beijing International Center for Mathematical Research, Peking University
[6] Department of Mathematics, Technical University of Munich
Source
Science China Mathematics | 2022, Vol. 65
Keywords
nonsmooth stochastic optimization; stochastic approximation; semismooth Newton method; stochastic second-order information; local convergence; 49M15; 65C60; 65K05; 90C06;
DOI: not available
Abstract
In this work, we present probabilistic local convergence results for a stochastic semismooth Newton method for a class of stochastic composite optimization problems whose objective function is the sum of a smooth nonconvex term and a nonsmooth convex term. We assume that gradient and Hessian information of the smooth part of the objective function can only be approximated, via calls to stochastic first- and second-order oracles. The approach combines stochastic semismooth Newton steps, stochastic proximal gradient steps, and a globalization strategy based on growth conditions. We present tail bounds and matrix concentration inequalities for the stochastic oracles that can be used to control the approximation errors by appropriately increasing the sampling rates. Under standard local assumptions, we prove that the proposed algorithm locally turns into a pure stochastic semismooth Newton method and converges r-linearly or r-superlinearly with high probability.
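The switching mechanism described in the abstract can be illustrated on a concrete instance. The following is a minimal sketch, not the paper's algorithm: it assumes an ℓ1-regularized least-squares objective, minibatch gradient/Hessian oracles with a growing sample size, a semismooth Newton step on the natural residual, and a simple residual-decrease test (contraction factor `theta`, a hypothetical parameter) standing in for the paper's growth conditions, with a stochastic proximal gradient step as fallback.

```python
import numpy as np

def prox_l1(z, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_ssn(A, b, lam, x0, batch=32, iters=40, theta=0.9, seed=0):
    """Sketch of a stochastic semismooth Newton scheme for
    min_x f(x) + lam*||x||_1 with f(x) = (1/2n)*||Ax - b||^2:
    minibatch oracles, Newton steps on the natural residual,
    and a residual-contraction test switching to prox-gradient steps."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = x0.astype(float).copy()
    for k in range(iters):
        # Stochastic oracles: growing sample size shrinks the oracle error.
        m = min(batch * (k + 1), n)
        idx = rng.choice(n, size=m, replace=False)
        Ai, bi = A[idx], b[idx]
        g = Ai.T @ (Ai @ x - bi) / m   # minibatch gradient of f
        H = Ai.T @ Ai / m              # minibatch Hessian of f
        t = 1.0 / max(np.linalg.eigvalsh(H)[-1], 1e-8)  # step size ~ 1/L
        # Natural residual F(x) = x - prox_{t*lam}(x - t*g); zero at stationarity.
        u = x - t * g
        Fx = x - prox_l1(u, t * lam)
        # A generalized Jacobian of prox_l1 at u is diagonal with 0/1 entries.
        D = (np.abs(u) > t * lam).astype(float)
        # Semismooth Newton system: (I - diag(D)(I - t*H)) s = -F(x).
        M = np.eye(d) - D[:, None] * (np.eye(d) - t * H)
        s = np.linalg.solve(M, -Fx)
        x_trial = x + s
        # Accept the Newton step only if the residual contracts.
        u_trial = x_trial - t * (Ai.T @ (Ai @ x_trial - bi) / m)
        F_trial = x_trial - prox_l1(u_trial, t * lam)
        if np.linalg.norm(F_trial) <= theta * np.linalg.norm(Fx):
            x = x_trial                       # Newton step accepted
        else:
            x = prox_l1(x - t * g, t * lam)   # fallback: stochastic prox-gradient
    return x
```

Once the sample size reaches the full data set, every accepted step is a pure semismooth Newton step, mirroring the local transition the paper establishes with high probability.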
Pages: 2151-2170 (19 pages)
Related papers
50 items
  • [1] On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
    Milzarek, Andre
    Xiao, Xiantao
    Wen, Zaiwen
    Ulbrich, Michael
    [J]. Science China Mathematics, 2022, 65 (10) : 2151 - 2170
  • [2] A stochastic semismooth Newton method for nonsmooth nonconvex optimization
    Milzarek, Andre
    Xiao, Xiantao
    Cen, Shicong
    Wen, Zaiwen
    Ulbrich, Michael
    [J]. SIAM Journal on Optimization, 2019, 29 (4) : 2916 - 2948
  • [3] Convergence of a stochastic subgradient method with averaging for nonsmooth nonconvex constrained optimization
    Ruszczynski, Andrzej
    [J]. Optimization Letters, 2020, 14 (7) : 1615 - 1625
  • [4] A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
    Yang, Minghan
    Milzarek, Andre
    Wen, Zaiwen
    Zhang, Tong
    [J]. Mathematical Programming, 2022, 194 (1-2) : 257 - 303
  • [5] A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization
    Ouyang, Wenqing
    Milzarek, Andre
    [J]. Mathematical Programming, 2024
  • [6] Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization
    Ermol'ev, Yu. M.
    Norkin, V. I.
    [J]. Cybernetics and Systems Analysis, 1998, 34 (2) : 196 - 215