Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-Max Optimization

Cited by: 0
Authors
Zheng, Tianqi [1 ]
Loizou, Nicolas [2 ]
You, Pengcheng [3 ]
Mallada, Enrique [1 ]
Affiliations
[1] Johns Hopkins Univ, Dept Elect & Comp Engn, Baltimore, MD 21218 USA
[2] Johns Hopkins Univ, Dept Appl Math & Stat, Baltimore, MD 21218 USA
[3] Peking Univ, Dept Ind Engn & Management, Beijing 100871, Peoples R China
Source
IEEE Control Systems Letters
Keywords
Optimization; optimization algorithms; Lyapunov methods; unified analysis
DOI
10.1109/LCSYS.2024.3413004
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline Classification Code
0812
Abstract
Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings. To address this problem, we introduce a dissipation term into the GDA updates to dampen these oscillations. The proposed Dissipative GDA (DGDA) method can be seen as performing standard GDA on a state-augmented and regularized saddle function that does not strictly introduce additional convexity/concavity. We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient (EG), and Optimistic GDA. Our findings demonstrate that DGDA surpasses these methods, achieving superior convergence rates. We support our claims with two numerical examples that showcase DGDA's effectiveness in solving saddle point problems.
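A minimal Python sketch may help make the mechanism in the abstract concrete. It runs simultaneous GDA on an assumed state-augmented saddle function f_hat(x, z, y, w) = f(x, y) + (rho/2)(x - z)^2 - (rho/2)(y - w)^2, which is consistent with the abstract's description but is not necessarily the paper's exact parameterization; the function name dgda, the step size eta, and the dissipation weight rho are illustrative choices, not the authors' notation.

def dgda(grad_x, grad_y, x, y, eta=0.2, rho=0.5, iters=200):
    # Simultaneous GDA on the assumed augmented saddle function
    #   f_hat(x, z, y, w) = f(x, y) + (rho/2)*(x - z)**2 - (rho/2)*(y - w)**2,
    # where (x, z) take descent steps, (y, w) take ascent steps, and the
    # auxiliary states z, w supply the dissipation that damps oscillations.
    z, w = x, y  # start the dissipation states at the primal iterates
    for _ in range(iters):
        gx = grad_x(x, y) + rho * (x - z)  # d f_hat / dx
        gy = grad_y(x, y) - rho * (y - w)  # d f_hat / dy
        # the tuple on the right is evaluated before assignment,
        # so all four updates use the current iterates (simultaneous GDA)
        x, z, y, w = (x - eta * gx,             # descent on x
                      z + eta * rho * (x - z),  # descent on z
                      y + eta * gy,             # ascent on y
                      w + eta * rho * (y - w))  # ascent on w
    return x, y

# Bilinear toy problem f(x, y) = x*y with saddle point (0, 0):
# grad_x f = y and grad_y f = x.
x_k, y_k = dgda(lambda x, y: y, lambda x, y: x, x=1.0, y=1.0)
print(x_k, y_k)  # both magnitudes shrink toward 0

With these illustrative parameters the bilinear iterates contract at a constant linear rate, in line with the abstract's linear-convergence claim, whereas plain GDA with the same eta spirals outward and diverges on this example.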
Pages: 2009 - 2014
Page count: 6
Related Papers
50 records total
  • [1] Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent
    Flokas, Lampros
    Vlatakis-Gkaragkounis, Emmanouil V.
    Piliouras, Georgios
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [2] Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization
    Yan, Yan
    Xu, Yi
    Lin, Qihang
    Liu, Wei
    Yang, Tianbao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33 (NEURIPS 2020), 2020, 33
  • [3] The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization
    Daskalakis, Constantinos
    Panageas, Ioannis
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [4] Alternating Gradient Descent Ascent for Nonconvex Min-Max Problems in Robust Learning and GANs
    Lu, Songtao
    Singh, Rahul
    Chen, Xiangyi
    Chen, Yongxin
    Hong, Mingyi
    CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019: 680 - 684
  • [5] Convergence Rates of Gradient Descent-Ascent Dynamics Under Delays in Solving Nonconvex Min-Max Optimization
    Do, Duy Anh
    Doan, Thinh T.
    2024 EUROPEAN CONTROL CONFERENCE, ECC 2024, 2024: 2748 - 2753
  • [6] A SIMPLE ALGORITHM FOR MIN-MAX NETWORK OPTIMIZATION
    DIMAIO, B
    SORBELLO, F
    ALTA FREQUENZA, 1988, 57 (05): 259 - 265
  • [7] A min-max optimization algorithm for global active acoustic radiation control
    Han, Rong
    Wu, Ming
    Chi, Kexun
    Yin, Lan
    Sun, Hongling
    Yang, Jun
    2019 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2019: 1815 - 1818
  • [8] Last-Iterate Convergence Rates for Min-Max Optimization: Convergence of Hamiltonian Gradient Descent and Consensus Optimization
    Abernethy, Jacob
    Lai, Kevin A.
    Wibisono, Andre
    ALGORITHMIC LEARNING THEORY, VOL 132, 2021, 132
  • [9] Convergence Theory of a SAA Method for Min-max Stochastic Optimization Problems
    Nie, Yunyun
    SENSORS, MEASUREMENT AND INTELLIGENT MATERIALS, PTS 1-4, 2013, 303-306: 1319 - 1322
  • [10] Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for Solving Nonconvex Min-Max Problems
    Doan, Thinh T.
    PROCEEDINGS OF MACHINE LEARNING RESEARCH, 2022, 168: 192 - 206