Online Learning With Inexact Proximal Online Gradient Descent Algorithms

Cited by: 59
Authors
Dixit, Rishabh [1 ]
Bedi, Amrit Singh [1 ]
Tripathi, Ruchi [1 ]
Rajawat, Ketan [1 ]
Affiliation
[1] Indian Inst Technol Kanpur, Dept Elect Engn, Kanpur 208016, Uttar Pradesh, India
Keywords
Dynamic regret; gradient descent; online convex optimization; subspace tracking; SUBGRADIENT METHODS; STOCHASTIC METHODS; ROBUST PCA;
DOI
10.1109/TSP.2018.2890368
CLC Number
TM (Electrical Technology); TN (Electronic Technology, Communication Technology);
Subject Classification Codes
0808; 0809;
Abstract
We consider nondifferentiable dynamic optimization problems such as those arising in robotics and subspace tracking. Given the computational constraints and the time-varying nature of the problem, a low-complexity algorithm is desirable, while the accuracy of the solution may only need to increase slowly over time. We put forth the proximal online gradient descent (OGD) algorithm for tracking the optimum of a composite objective function comprising a differentiable loss function and a nondifferentiable regularizer. An online learning framework is considered, and the gradient of the loss function is allowed to be erroneous. Both the gradient error and the dynamics of the function optimum, or target, are adversarial, and the performance of the inexact proximal OGD is characterized via its dynamic regret, which is bounded in terms of the cumulative gradient error and the path length of the target. The proposed inexact proximal OGD is further generalized to large-scale problems where the loss function has a finite-sum structure. In such cases, evaluating the full gradient may not be viable, so a variance-reduced version is proposed that allows the component functions to be subsampled. The efficacy of the proposed algorithms is tested on the problem of formation control in robotics and on the dynamic foreground-background separation problem in video.
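To make the update concrete: proximal OGD alternates a gradient step on the smooth loss with the proximal operator of the regularizer, and inexactness enters as an additive error on the gradient. The sketch below is a minimal illustration of this update on an assumed instance (a quadratic loss with a slowly drifting target and an l1 regularizer, whose prox is soft-thresholding); the step size, noise level, and drift model are illustrative choices, not the paper's algorithm parameters or experimental setup.

```python
import numpy as np

LAM = 0.05   # l1 regularization weight (illustrative)
STEP = 0.1   # constant step size (illustrative)
d = 20       # problem dimension (illustrative)

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Time-varying smooth loss l_t(x) = 0.5 * ||x - b_t||^2 with a slowly
# drifting b_t, so the path length of the optimum grows slowly.
def b_t(t):
    b = np.zeros(d)
    b[:3] = 1.0 + 0.01 * t
    return b

def optimum(t):
    # Minimizer of l_t(x) + LAM * ||x||_1 for this quadratic loss.
    return soft_threshold(b_t(t), LAM)

rng = np.random.default_rng(0)
x = np.zeros(d)
for t in range(200):
    grad = x - b_t(t)                                 # exact gradient of l_t
    grad += 0.01 * rng.standard_normal(d)             # inexact/adversarial gradient error e_t
    x = soft_threshold(x - STEP * grad, STEP * LAM)   # proximal OGD update
    if t % 50 == 0:
        print(t, np.linalg.norm(x - optimum(t)))      # tracking error vs. moving optimum
```

Consistent with the abstract's regret characterization, the tracking error in such a run stays small as long as the per-step drift of the optimum (the path length) and the gradient noise (the cumulative error) remain small.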
Pages: 1338-1352
Number of Pages: 15
Related Papers
50 records in total
  • [1] Online Gradient Descent Learning Algorithms
    Yiming Ying
    Massimiliano Pontil
    [J]. Foundations of Computational Mathematics, 2008, 8 : 561 - 596
  • [2] Online gradient descent learning algorithms
    Ying, Yiming
    Pontil, Massimiliano
    [J]. FOUNDATIONS OF COMPUTATIONAL MATHEMATICS, 2008, 8 (05) : 561 - 596
  • [3] Time Varying Optimization via Inexact Proximal Online Gradient Descent
    Dixit, Rishabh
    Bedi, Amrit Singh
    Tripathi, Ruchi
    Rajawat, Ketan
    [J]. 2018 CONFERENCE RECORD OF 52ND ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2018, : 759 - 763
  • [4] Online gradient descent algorithms for functional data learning
    Chen, Xiaming
    Tang, Bohao
    Fan, Jun
    Guo, Xin
    [J]. JOURNAL OF COMPLEXITY, 2022, 70
  • [5] LEARNING BY ONLINE GRADIENT DESCENT
    BIEHL, M
    SCHWARZE, H
    [J]. JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1995, 28 (03): : 643 - 656
  • [6] Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning
    Yang, Zhenhuan
    Lei, Yunwen
    Wang, Puyu
    Yang, Tianbao
    Ying, Yiming
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Distributed and Inexact Proximal Gradient Method for Online Convex Optimization
    Bastianello, Nicola
    Dall'Anese, Emiliano
    [J]. 2021 EUROPEAN CONTROL CONFERENCE (ECC), 2021, : 2432 - 2437
  • [8] Tracking Moving Agents via Inexact Online Gradient Descent Algorithm
    Bedi, Amrit Singh
    Sarma, Paban
    Rajawat, Ketan
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2018, 12 (01) : 202 - 217
  • [9] Dual Space Gradient Descent for Online Learning
    Trung Le
    Tu Dinh Nguyen
    Vu Nguyen
    Dinh Phung
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [10] Online learning via congregational gradient descent
    Kim L. Blackmore
    Robert C. Williamson
    Iven M. Y. Mareels
    William A. Sethares
    [J]. Mathematics of Control, Signals and Systems, 1997, 10 : 331 - 363