Globally convergent Newton-type methods for multiobjective optimization

Cited by: 16
Authors
Goncalves, M. L. N. [1 ]
Lima, F. S. [1 ]
Prudente, L. F. [1 ]
Affiliation
[1] Univ Fed Goias, IME, Campus 2, Caixa Postal 131, BR-74001970 Goiania, GO, Brazil
Keywords
Multiobjective optimization; Newton method; Global convergence; Numerical experiments; Projected gradient method; Vector optimization; Line searches; Pareto set
DOI
10.1007/s10589-022-00414-7
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research]
Discipline classification codes
070105; 12; 1201; 1202; 120202
Abstract
We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed for convex problems, whereas the second combines second-order information of the objective functions with ingredients of the steepest descent method. A key point of our approaches is the imposition of safeguard strategies on the search directions. These strategies are associated with conditions that prevent, at each iteration, the search direction from becoming too close to orthogonal to the multiobjective steepest descent direction, and that require the lengths of the two directions to remain proportional. To fulfill the required safeguard conditions on the search directions of the Newton-type methods, we adopt the technique in which the Hessians are modified, if necessary, by adding multiples of the identity. For our first Newton-type method, it is also shown that, under convexity assumptions, the local superlinear rate of convergence (or quadratic, when the Hessians of the objectives are Lipschitz continuous) to a local efficient point of the given problem is recovered. Global convergence of both methods is established by first presenting a general algorithm and proving its global convergence, and then showing that the new methods are instances of this general scheme. Numerical experiments illustrating the practical advantages of the proposed Newton-type schemes are presented.
Pages: 403-434
Number of pages: 32
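
To make the safeguard strategy described in the abstract concrete, here is a minimal Python sketch (not the authors' implementation; the toy bi-objective problem, the helper names, and the tolerances delta, theta, c_low, c_up are all illustrative assumptions). It regularizes each Hessian by adding a multiple of the identity, computes the multiobjective steepest descent and Newton-type directions from the usual min-max subproblem, and falls back to the steepest descent direction whenever the angle or length safeguard fails.

```python
# Minimal sketch, assuming a toy bi-objective problem; not the authors' code.
import numpy as np
from scipy.optimize import minimize


def grads(x):
    # Gradients of f1(x) = (x0 - 1)^2 + x1^2 and f2(x) = x0^2 + (x1 + 1)^2.
    return [np.array([2 * (x[0] - 1), 2 * x[1]]),
            np.array([2 * x[0], 2 * (x[1] + 1)])]


def hessians(x):
    # Both toy objectives have constant Hessian 2*I.
    return [2 * np.eye(2), 2 * np.eye(2)]


def regularize(H, delta=1e-6):
    """Add a multiple of the identity so the smallest eigenvalue is >= delta."""
    mu = max(0.0, delta - np.linalg.eigvalsh(H).min())
    return H + mu * np.eye(H.shape[0])


def min_max_direction(gs, Bs):
    """Solve min_d max_j { g_j^T d + 0.5 d^T B_j d } via the smooth
    reformulation  min_{(d,t)} t  s.t.  g_j^T d + 0.5 d^T B_j d <= t."""
    n = gs[0].size
    cons = [{'type': 'ineq',
             'fun': lambda z, g=g, B=B: z[-1] - g @ z[:n] - 0.5 * z[:n] @ B @ z[:n]}
            for g, B in zip(gs, Bs)]
    res = minimize(lambda z: z[-1], np.zeros(n + 1),
                   constraints=cons, method='SLSQP')
    return res.x[:n]


def safeguarded_direction(x, theta=1e-4, c_low=1e-2, c_up=1e2):
    gs, Hs = grads(x), hessians(x)
    # Steepest descent direction: B_j = I for every objective.
    d_sd = min_max_direction(gs, [np.eye(x.size) for _ in gs])
    # Newton-type direction: regularized (positive definite) Hessians.
    d_nt = min_max_direction(gs, [regularize(H) for H in Hs])
    # Safeguards: not nearly orthogonal to d_sd, and of comparable length.
    angle_ok = d_nt @ d_sd >= theta * np.linalg.norm(d_nt) * np.linalg.norm(d_sd)
    length_ok = (c_low * np.linalg.norm(d_sd) <= np.linalg.norm(d_nt)
                 <= c_up * np.linalg.norm(d_sd))
    return d_nt if (angle_ok and length_ok) else d_sd


print(safeguarded_direction(np.array([2.0, 2.0])))
```

Note that passing identity matrices to min_max_direction recovers exactly the steepest descent subproblem, since the quadratic term then no longer depends on the objective index; this is why a single subproblem solver serves both directions in this sketch.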