A Method for Transforming Non-Convex Optimization Problem to Distributed Form

Cited by: 1
Authors
Khamisov, Oleg O. [1 ]
Khamisov, Oleg V. [1 ]
Ganchev, Todor D. [2 ]
Semenkin, Eugene S. [3 ]
Affiliations
[1] Melentiev Energy Syst Inst, Dept Appl Math, Irkutsk 664033, Russia
[2] Tech Univ Varna, Dept Comp Sci & Engn, Varna 9010, Bulgaria
[3] Bauman Moscow State Tech Univ, Sci & Educ Ctr Artificial Intelligence Technol, Moscow 105005, Russia
Keywords
distributed optimization; non-convex optimization; gradient descent; Newton's method; REAL-TIME; DECOMPOSITION; ALGORITHM; DESIGN
DOI
10.3390/math12172796
CLC number
O1 [Mathematics]
Subject classification
0701; 070101
Abstract
We propose a novel distributed method for non-convex optimization problems with coupling equality and inequality constraints. The method transforms the problem into a specific form that admits distributed implementations of modified gradient descent and Newton's methods. We demonstrate that for the proposed distributed method: (i) communications are significantly less time-consuming than oracle calls; (ii) its convergence rate, measured in oracle calls, matches that of Newton's method; and (iii) when oracle calls are more expensive than inter-agent communication, the transition from a centralized to a distributed paradigm does not significantly increase computational time. The proposed method is applicable whenever the objective function is twice differentiable and the constraints are differentiable, which holds for a wide range of machine learning methods and optimization setups.
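
To make the distributed setting concrete, here is a minimal illustrative sketch in Python, not the authors' transformation: plain decentralized gradient descent in which each agent holds a private (possibly non-convex) local objective, exchanges its current iterate only with graph neighbours (the communication step), and takes a local gradient step (the oracle call). The toy objectives, ring topology, step size, and all names below are assumptions made for illustration.

import numpy as np

# Illustrative sketch only: decentralized gradient descent with consensus
# averaging over a fixed communication graph. This is a generic distributed-
# optimization template, NOT the paper's specific transformation; the toy
# problem, step size, and mixing matrix are assumptions.

n_agents, dim = 4, 2
rng = np.random.default_rng(0)

# Each agent i holds a private local objective
#   f_i(x) = ||x - a_i||^2 + 0.1 * sin(x[0]),
# where the sine term makes it mildly non-convex.
anchors = rng.normal(size=(n_agents, dim))

def local_grad(i, x):
    g = 2.0 * (x - anchors[i])
    g[0] += 0.1 * np.cos(x[0])  # gradient of the non-convex sine term
    return g

# Doubly stochastic mixing matrix for a ring graph: each agent
# communicates only with its two neighbours.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

X = rng.normal(size=(n_agents, dim))  # one local copy of x per agent
step = 0.05
for _ in range(200):
    X = W @ X                          # communication: neighbour averaging
    grads = np.array([local_grad(i, X[i]) for i in range(n_agents)])
    X -= step * grads                  # oracle call: local gradient step

print("agent estimates:\n", X)         # copies roughly agree near a stationary point

Under the paper's cost model, each local_grad evaluation is an oracle call and each W @ X multiply is one round of neighbour communication; a Newton-type variant would replace the gradient step with a locally computed Newton step.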
Pages: 16