On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces

Cited by: 2
Authors
Jin, Bangti [1 ]
Kereta, Zeljko [2 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Math, Shatin, Hong Kong, Peoples R China
[2] UCL, Dept Comp Sci, Gower St, London WC1E 6BT, England
Source
SIAM JOURNAL ON IMAGING SCIENCES | 2023, Vol. 16, No. 2
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
stochastic gradient descent; Banach spaces; linear inverse problems; convergence rate; regularizing property; almost sure convergence; ILL-POSED PROBLEMS; REGULARIZATION; OPTIMIZATION; PARAMETERS; KACZMARZ; CHOICE;
DOI
10.1137/22M1518542
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this work we consider stochastic gradient descent (SGD) for solving linear inverse problems in Banach spaces. SGD and its variants are among the most successful optimization methods in machine learning, imaging, and signal processing, to name a few application areas. At each iteration SGD uses a single datum, or a small subset of the data, resulting in highly scalable methods that are very attractive for large-scale inverse problems. Nonetheless, the theoretical analysis of SGD-based approaches for inverse problems has thus far been largely limited to Euclidean and Hilbert spaces. We present a novel convergence analysis of SGD for linear inverse problems in general Banach spaces: we show the almost sure convergence of the iterates to the minimum-norm solution and establish the regularizing property for suitable a priori stopping criteria. Numerical results are also presented to illustrate features of the approach.
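To make the single-datum iteration described in the abstract concrete, the following is a minimal numerical sketch of a stochastic row-action gradient iteration for a linear system A x = y with the primal iterate in the sequence space l^p, updated in the dual space via componentwise duality maps. This is an illustrative sketch under simplifying assumptions (finite-dimensional l^p, polynomially decaying step sizes, uniform single-row sampling); the names sgd_banach and duality_map are hypothetical, and the scheme is not necessarily the exact method analyzed in the paper.

import numpy as np

def duality_map(v, q):
    # Componentwise duality map on l^q: J_q(v)_i = |v_i|^(q-1) * sign(v_i).
    return np.sign(v) * np.abs(v) ** (q - 1)

def sgd_banach(A, y, p=1.5, n_iter=5000, eta=1.0, seed=0):
    # Stochastic row-action iteration for A x = y with the primal iterate in l^p.
    # The gradient step is taken on a dual variable xi; the primal iterate is
    # recovered through the duality map J_{p*}, where 1/p + 1/p* = 1.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    p_star = p / (p - 1)                 # conjugate exponent
    xi = np.zeros(d)                     # dual iterate (xi_0 = 0)
    x = np.zeros(d)                      # primal iterate (x_0 = 0)
    row_norms = np.sum(A ** 2, axis=1)
    for k in range(n_iter):
        i = rng.integers(n)              # sample a single datum (one row)
        r = A[i] @ x - y[i]              # scalar residual of the sampled equation
        step = eta / (row_norms[i] * np.sqrt(k + 1.0))  # decaying step size
        xi -= step * r * A[i]            # gradient step in the dual space
        x = duality_map(xi, p_star)      # map back to the primal space
    return x

# Toy usage: an underdetermined system with a sparse solution; choosing p close
# to 1 biases the iterates towards sparse reconstructions, while p = 2 recovers
# the classical Hilbert-space SGD (stochastic Landweber/Kaczmarz-type) iteration.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 400))
x_true = np.zeros(400)
x_true[:10] = 1.0
y = A @ x_true
x_rec = sgd_banach(A, y, p=1.2, n_iter=20000)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))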
Pages: 671-705
Page count: 35