On a Relationship between Integral Compensation and Stochastic Gradient Descent

Times Cited: 0
Authors
Fujimoto, Yusuke [1 ]
Maruta, Ichiro [1 ]
Sugie, Toshiharu [1 ]
Affiliations
[1] Kyoto Univ, Grad Sch Informat, Dept Syst Sci, Kyoto, Japan
Keywords
Machine learning; disturbance rejection; stochastic gradient descent;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Disturbance rejection is a fundamental problem in control engineering, and many methods exist to achieve it. One of the standard approaches is to employ an integral compensator, which integrates the error between the reference signal and the output and uses the integrated value, scaled by a specific coefficient, as a compensating signal. In this work, we discuss the relationship between machine learning theory and integral compensation. We focus on single-input single-output discrete-time time-invariant systems and show that, in this case, integral compensation can be understood in the context of machine learning. In particular, integral compensation is identical to online optimization with standard stochastic gradient descent [1]. This idea yields two suggestions. First, integral compensation may be made faster by employing other variants of stochastic gradient descent; many such algorithms have been proposed in the machine learning literature, and they may improve on the classical integral compensator. Second, it may become possible to reject time-invariant state-dependent disturbances, of which modeling error is a typical example. By learning such a disturbance, the control performance for repetitive motions can be improved over plain integral compensation. This presentation discusses the pros and cons of regarding integral compensation as stochastic gradient descent optimization.
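A minimal sketch of the correspondence claimed above, assuming a scalar first-order plant x[k+1] = a*x[k] + b*(u[k] + d) with an unknown constant input disturbance d (the plant, gains, and variable names here are our assumptions for illustration, not the authors' setup):

```python
import numpy as np

# Sketch: classical integral compensation vs. online SGD on the squared
# tracking error, for x[k+1] = a*x[k] + b*(u[k] + d) with constant d.
a, b, d = 0.9, 1.0, 0.5   # assumed plant parameters and true disturbance
r = 1.0                   # constant reference
Ki = eta = 0.1            # integral gain doubles as the SGD step size
N = 50

# (i) integral compensation: z[k+1] = z[k] + Ki*e[k], input u = z
x, z, u_int = 0.0, 0.0, []
for k in range(N):
    e = r - x                 # tracking error
    z += Ki * e               # integrate the error
    u_int.append(z)           # compensating signal
    x = a * x + b * (z + d)

# (ii) online SGD on the loss L(d_hat) = 0.5*e^2, treating -e as the
# gradient with respect to the disturbance estimate d_hat (the factor b
# is absorbed into the step size in this sketch).
x, d_hat, u_sgd = 0.0, 0.0, []
for k in range(N):
    e = r - x
    d_hat += eta * e          # SGD step: d_hat <- d_hat - eta * dL/dd_hat
    u_sgd.append(d_hat)       # inject the estimate to cancel d
    x = a * x + b * (d_hat + d)

print(np.allclose(u_int, u_sgd))  # True: the two recursions coincide
```

Because the two loops compute the same recursion, the printed check is True; under these assumptions the identification is exact rather than approximate.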
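The abstract's first suggestion, that other SGD variants may speed up integral compensation, can be sketched the same way. Below, a heavy-ball (momentum) step replaces the plain SGD update; this is a hypothetical example of ours, since the abstract does not commit to a specific algorithm:

```python
import numpy as np

# Hypothetical accelerated variant: heavy-ball momentum on the
# disturbance estimate, using the same assumed plant as above.
a, b, d, r = 0.9, 1.0, 0.5, 1.0
eta, beta, N = 0.1, 0.5, 50   # step size, momentum coefficient

x, d_hat, v, errs = 0.0, 0.0, 0.0, []
for k in range(N):
    e = r - x
    v = beta * v + e          # momentum accumulator
    d_hat += eta * v          # accelerated update of the estimate
    x = a * x + b * (d_hat + d)
    errs.append(abs(e))

print(errs[::10])  # for suitable beta, decays faster than plain SGD
```

In control terms, the momentum accumulator acts like an additional filtered error term, so the resulting compensator is no longer a pure integrator.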
Pages: 343-343
Number of Pages: 1