The Chaotic Nature of Faster Gradient Descent Methods

Cited by: 0
Authors
Kees van den Doel
Uri Ascher
Affiliation
[1] University of British Columbia, Department of Computer Science
Source
Journal of Scientific Computing
Keywords
Gradient descent; Iterative methods; Chaos; Dynamical systems
DOI
Not available
Abstract
The steepest descent method for large linear systems is well known to often converge very slowly: the number of iterations required is roughly the same as for gradient descent with the best constant step size, and it grows in proportion to the condition number of the matrix. Faster gradient descent methods must occasionally resort to significantly larger step sizes, which in turn produces a markedly non-monotone decrease in the residual vector norm.
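As a rough illustration of this contrast (a sketch under stated assumptions, not the paper's own experiments), the snippet below runs plain steepest descent next to a lagged-step variant that reuses the steepest descent step length from the previous iteration, in the spirit of lagged steepest descent and Barzilai-Borwein type methods. The matrix construction, problem size, and iteration count are all illustrative choices, and the lagged step is an illustrative stand-in for the "faster" methods discussed, not necessarily the exact scheme the paper analyzes.

```python
import numpy as np

# Illustrative assumption: a synthetic SPD system A x = b whose eigenvalues
# span four orders of magnitude, so the condition number is about 1e4.
rng = np.random.default_rng(0)
n = 100
eigs = np.logspace(0, 4, n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
A = Q @ np.diag(eigs) @ Q.T                       # SPD test matrix
b = rng.standard_normal(n)

def gradient_descent(lagged, iters=300):
    """Iterate x_{k+1} = x_k + alpha_k * r_k with residual r_k = b - A x_k."""
    x = np.zeros(n)
    r = b - A @ x
    r_prev = r.copy()
    hist = []
    for _ in range(iters):
        # Exact line-search step computed from the current residual (steepest
        # descent) or from the previous one (lagged "faster" variant).
        s = r_prev if lagged else r
        alpha = (s @ s) / (s @ (A @ s))
        r_prev = r.copy()
        x = x + alpha * r
        r = b - A @ x
        hist.append(np.linalg.norm(r))
    return np.array(hist)

sd = gradient_descent(lagged=False)
lsd = gradient_descent(lagged=True)

# Steepest descent stalls at a rate governed by the condition number, while
# the lagged step occasionally takes much larger steps: its residual history
# is non-monotone yet typically reaches a given tolerance far sooner.
print("steepest descent: final ||r|| =", sd[-1])
print("lagged step     : final ||r|| =", lsd[-1])
```

Plotting the two residual histories on a log scale makes the behavior described in the abstract visible: a slow, smooth decay for steepest descent versus an erratic, spiky, but much faster overall decrease for the lagged variant.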
Pages: 560-581 (21 pages)
Related papers (50 in total)
  • [1] The Chaotic Nature of Faster Gradient Descent Methods
    van den Doel, Kees
    Ascher, Uri
    Journal of Scientific Computing, 2012, 51(3): 560-581
  • [2] Gradient-descent methods for parameter estimation in chaotic systems
    Mariño, I. P.
    Míguez, J.
    ISPA 2005: Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005: 440-445
  • [3] On Faster Convergence of Scaled Sign Gradient Descent
    Li, Xiuxian
    Lin, Kuo-Yi
    Li, Li
    Hong, Yiguang
    Chen, Jie
    IEEE Transactions on Industrial Informatics, 2024, 20(2): 1732-1741
  • [4] Faster Gradient Descent and the Efficient Recovery of Images
    Huang, H.
    Ascher, U.
    Vietnam Journal of Mathematics, 2014, 42(1): 115-131
  • [5] Finding the chaotic synchronizing state with gradient descent algorithm
    Chen, J. Y.
    Wong, K. W.
    Shuai, J. W.
    Physics Letters A, 1999, 263(4-6): 315-322
  • [6] Explicit stabilised gradient descent for faster strongly convex optimisation
    Eftekhari, Armin
    Vandereycken, Bart
    Vilmart, Gilles
    Zygalakis, Konstantinos C.
    BIT Numerical Mathematics, 2021, 61(1): 119-139
  • [7] Finding Approximate Local Minima Faster than Gradient Descent
    Agarwal, Naman
    Allen-Zhu, Zeyuan
    Bullins, Brian
    Hazan, Elad
    Ma, Tengyu
    STOC'17: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017: 1195-1199
  • [8] Train faster, generalize better: Stability of stochastic gradient descent
    Hardt, Moritz
    Recht, Benjamin
    Singer, Yoram
    International Conference on Machine Learning, 2016, Vol. 48
  • [9] Multiplicative Parameters in Gradient Descent Methods
    Stanimirovic, Predrag
    Miladinovic, Marko
    Djordjevic, Snezana
    Filomat, 2009, 23(3): 23-36