A descent subgradient method using Mifflin's line search for nonsmooth nonconvex optimization

Cited by: 1
Authors
Maleknia, Morteza [1 ,2 ]
Soleimani-Damaneh, Majid [1 ]
Affiliations
[1] Univ Tehran, Coll Sci, Sch Math Stat & Comp Sci, ICOL Ind & Computat Optimizat Lab, Tehran, Iran
[2] Isfahan Univ Technol, Dept Math Sci, Esfahan, Iran
Funding
U.S. National Science Foundation;
Keywords
Nonlinear optimization; nonsmooth optimization; nonconvex programming; subgradient; GRADIENT SAMPLING ALGORITHM; VARIABLE-METRIC METHOD; BUNDLE METHODS; CONVERGENCE;
DOI
10.1080/02331934.2024.2322152
Chinese Library Classification
C93 [Management]; O22 [Operations Research];
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
We propose a descent subgradient algorithm for minimizing a function $f:\mathbb{R}^n \to \mathbb{R}$, assumed to be locally Lipschitz, but not necessarily smooth or convex. To find an effective descent direction, the Goldstein epsilon-subdifferential is approximated through an iterative process. The method enjoys a new two-point variant of Mifflin's line search in which the subgradients are arbitrary. Thus, the line search procedure is easy to implement. Moreover, in comparison to bundle methods, the quadratic subproblems have a simple structure, and to handle nonconvexity the proposed method requires no algorithmic modification. We study the global convergence of the method and prove that any accumulation point of the generated sequence is Clarke stationary, assuming that the objective f is weakly upper semismooth. We illustrate the efficiency and effectiveness of the proposed algorithm on a collection of academic and semi-academic test problems.
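The abstract's core mechanism — approximate the Goldstein epsilon-subdifferential, take the minimum-norm element as a descent direction, then run a line search — can be illustrated with a minimal sketch. This is not the paper's algorithm: random sampling stands in for the paper's iterative subdifferential approximation, a plain backtracking search replaces the two-point Mifflin line search, and the test function `f` is an arbitrary choice for illustration.

```python
import numpy as np

def f(x):
    # Illustrative nonsmooth convex test function (not from the paper)
    return abs(x[0]) + 2.0 * abs(x[1])

def subgrad(x):
    # A subgradient of f, valid almost everywhere
    return np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])

def min_norm_hull(G, iters=200):
    # Minimum-norm element of conv{rows of G} via Frank-Wolfe:
    # solves the quadratic subproblem min ||g||^2 over the convex hull.
    g = G.mean(axis=0)
    for _ in range(iters):
        v = G[np.argmin(G @ g)]        # vertex most opposed to g
        d = v - g
        dd = d @ d
        if dd < 1e-16:
            break
        t = min(1.0, max(0.0, -(g @ d) / dd))
        g = g + t * d
    return g

def descent_step(x, eps=0.1, m=20):
    rng = np.random.default_rng(0)
    # Approximate the Goldstein eps-subdifferential by subgradients
    # sampled in an eps-box around x (random sampling here stands in
    # for the paper's deterministic iterative approximation).
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, x.size))
    G = np.array([subgrad(p) for p in pts])
    g = min_norm_hull(G)
    gnorm = np.linalg.norm(g)
    if gnorm < 1e-8:                   # approximately Clarke stationary
        return x
    d = -g / gnorm
    # Plain backtracking line search -- a simplified stand-in for the
    # two-point Mifflin line search proposed in the paper.
    t = 1.0
    while f(x + t * d) > f(x) - 1e-4 * t * gnorm and t > 1e-10:
        t *= 0.5
    return x + t * d

x = np.array([3.0, -2.0])
for _ in range(50):
    x = descent_step(x)
```

The minimum-norm quadratic subproblem over sampled subgradients is the same structural ingredient that the abstract contrasts with bundle methods; here it is solved with a simple Frank-Wolfe loop so the sketch needs only NumPy.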
Pages: 27
Related Papers
50 records in total
  • [41] Direct Search and Stochastic Optimization Applied to Two Nonconvex Nonsmooth Problems
    Easterling, David R.
    Watson, Layne T.
    Madigan, Michael L.
    Castle, Brent S.
    Trosset, Michael W.
    [J]. HIGH PERFORMANCE COMPUTING SYMPOSIUM 2012 (HPC 2012), 2012, 44 (06): 66 - 72
  • [42] A Descent Method for Nonsmooth Multiobjective Optimization in Hilbert Spaces
    Sonntag, Konstantin
    Gebken, Bennet
    Mueller, Georg
    Peitz, Sebastian
    Volkwein, Stefan
    [J]. JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2024, : 455 - 487
  • [43] Descent direction method with line search for unconstrained optimization in noisy environment
    Krejic, Natasa
    Luzanin, Zorana
    Ovcin, Zoran
    Stojkovska, Irena
    [J]. OPTIMIZATION METHODS & SOFTWARE, 2015, 30 (06): 1164 - 1184
  • [44] An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization
    Liu, Ruyu
    Pan, Shaohua
    Wu, Yuqia
    Yang, Xiaoqi
    [J]. COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2024, 88 (02) : 603 - 641
  • [45] Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization
    Ermol'ev, YM
    Norkin, VI
    [J]. CYBERNETICS AND SYSTEMS ANALYSIS, 1998, 34 (02) : 196 - 215
  • [46] A filter proximal bundle method for nonsmooth nonconvex constrained optimization
    Hoseini Monjezi, Najmeh
    Nobakhtian, S.
    [J]. JOURNAL OF GLOBAL OPTIMIZATION, 2021, 79 : 1 - 37
  • [47] Linearized Alternating Direction Method with Penalization for Nonconvex and Nonsmooth Optimization
    Wang, Yiyang
    Liu, Risheng
    Song, Xiaoliang
    Su, Zhixun
    [J]. THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 798 - 804
  • [48] The Projected Subgradient Method for Nonsmooth Convex Optimization in the Presence of Computational Errors
    Zaslavski, Alexander J.
    [J]. NUMERICAL FUNCTIONAL ANALYSIS AND OPTIMIZATION, 2010, 31 (05) : 616 - 633
  • [49] Incremental subgradient method for nonsmooth convex optimization with fixed point constraints
    Iiduka, H.
    [J]. OPTIMIZATION METHODS & SOFTWARE, 2016, 31 (05): 931 - 951