On the Initialization for Convex-Concave Min-max Problems

Cited by: 0
|
Authors
Liu, Mingrui [1 ]
Orabona, Francesco [2 ]
Affiliations
[1] George Mason Univ, Dept Comp Sci, Fairfax, VA 22030 USA
[2] Boston Univ, Elect & Comp Engn, Boston, MA 02215 USA
Funding
U.S. National Science Foundation;
Keywords
Convex-concave; Min-max; Initialization; Fast Rates; Parameter-Free; VARIATIONAL-INEQUALITIES; EXTRAGRADIENT METHOD; SADDLE; MINIMIZATION; CONVERGENCE;
DOI
Not available
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convex-concave min-max problems are ubiquitous in machine learning, and first-order methods (e.g., gradient descent ascent) are commonly used to find the optimal solution. One feature that separates convex-concave min-max problems from convex minimization problems is that the best known convergence rates for min-max problems depend explicitly on the size of the domain, rather than on the distance between the initial point and the optimal solution. This means that the convergence speed does not improve even if the algorithm starts from the optimal solution; hence, it is oblivious to the initialization. Here, we show that strict-convexity-strict-concavity is sufficient for the convergence rate to depend on the initialization. We also show how different algorithms can asymptotically achieve initialization-dependent convergence rates on this class of functions. Furthermore, we show that so-called "parameter-free" algorithms achieve improved initialization-dependent asymptotic rates without any learning rate to tune. In addition, we use such a parameter-free algorithm as a subroutine to design a new algorithm that achieves a novel non-asymptotic fast rate for strictly-convex-strictly-concave min-max problems with a growth condition and Hölder continuous solution mapping. Experiments are conducted to verify our theoretical findings and demonstrate the effectiveness of the proposed algorithms.
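To illustrate the setting the abstract describes, here is a minimal sketch (not the paper's algorithm) of simultaneous gradient descent ascent on a strictly-convex-strictly-concave toy objective f(x, y) = x²/2 + xy − y²/2, whose unique saddle point is (0, 0); the step size and iteration count are illustrative choices.

```python
# Gradient descent ascent (GDA) sketch on a strictly-convex-strictly-concave
# objective f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle point at (0, 0).
# The min player descends in x while the max player ascends in y.

def grad_x(x, y):
    return x + y  # partial derivative of f with respect to x

def grad_y(x, y):
    return x - y  # partial derivative of f with respect to y

def gda(x0, y0, eta=0.1, steps=2000):
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x -= eta * gx  # descent step for the minimizing variable
        y += eta * gy  # ascent step for the maximizing variable
    return x, y

x, y = gda(3.0, -2.0)
print(x, y)  # both coordinates approach the saddle point (0, 0)
```

On this strictly-convex-strictly-concave instance the iterates contract toward the saddle point, so the number of steps needed scales with the distance from the initialization, in the spirit of the initialization-dependent rates discussed above; on a merely convex-concave (e.g., bilinear) objective, plain simultaneous GDA can instead cycle or diverge.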
Pages: 25