Reliable Localized On-line Learning in Non-stationary Environments

Cited by: 0
Authors
Buschermoehle, Andreas [1 ]
Brockmann, Werner [1 ]
Affiliations
[1] Univ Osnabruck, Smart Embedded Syst Grp, Osnabruck, Germany
Keywords
PERCEPTRON; DESCENT; MODEL;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
On-line learning makes it possible to adapt to changing, non-stationary environments. Typically, however, the hypothesis of the data relation is adapted based on a stream of single, local training examples, which continuously changes the global input-output relation. Hence, with each single example the whole hypothesis is revised incrementally, which might harm the overall predictive quality of the learned model. Nevertheless, for a reliable adaptation, the learned model must yield good predictions in every step. Therefore, the IRMA approach to on-line learning enables an adaptation that reliably incorporates a new example with a stringent local, but minimal global influence on the input-output relation. The main contribution of this paper is twofold. First, it presents an extension of IRMA regarding the setup of the stiffness, i.e. its hyper-parameter. Second, the IRMA approach is investigated for the first time on a non-trivial real-world application in a non-stationary environment. It is compared with state-of-the-art algorithms on predicting future electric loads in a power grid, where continuous adaptation is necessary to cope with season and weather conditions. The results show that IRMA significantly increases prediction performance.
Pages: 7
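To make the idea in the abstract above concrete, the following is a minimal illustrative sketch of localized on-line learning. It is not the authors' IRMA algorithm: it only mimics the described behaviour of incorporating each new example with a strictly local update so that the global input-output relation is perturbed as little as possible. The class name LocalizedOnlineLearner, the Gaussian basis features, and the use of a "stiffness" value as a simple damping term are assumptions made for this example.

```python
# Illustrative sketch only -- NOT the IRMA algorithm from the paper.
# A new example triggers a passive-aggressive-style update on compactly
# decaying basis functions, so the change stays local while the global
# input-output relation changes as little as possible.

import numpy as np


class LocalizedOnlineLearner:
    def __init__(self, centers, width=1.0, stiffness=1.0):
        self.centers = np.asarray(centers, dtype=float)  # fixed basis centers
        self.width = width                               # locality of each basis function
        self.stiffness = stiffness                       # higher value -> smaller updates
        self.weights = np.zeros(len(self.centers))

    def _features(self, x):
        # Gaussian activations decay quickly with distance, so only basis
        # functions near x contribute noticeably to prediction and update.
        return np.exp(-((x - self.centers) ** 2) / (2.0 * self.width ** 2))

    def predict(self, x):
        return float(self._features(x) @ self.weights)

    def update(self, x, y):
        # Move the weights just enough towards fitting the new example,
        # damped by the stiffness.  Weights of distant basis functions
        # receive almost no change, keeping the update local.
        phi = self._features(x)
        error = y - phi @ self.weights
        self.weights += (error / (phi @ phi + self.stiffness)) * phi


if __name__ == "__main__":
    # Toy non-stationary stream: the target function drifts slowly over time,
    # so the model must keep adapting while it predicts.
    rng = np.random.default_rng(0)
    model = LocalizedOnlineLearner(centers=np.linspace(0.0, 10.0, 25),
                                   width=0.5, stiffness=0.1)
    for t in range(2000):
        x = rng.uniform(0.0, 10.0)
        y = np.sin(x + 0.001 * t) + 0.05 * rng.normal()  # drifting target
        y_hat = model.predict(x)   # predict first ...
        model.update(x, y)         # ... then adapt on-line
```

In this toy setting the stiffness merely damps the step size of each update; it stands in only loosely for the hyper-parameter of the same name discussed in the paper.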