Neural Networks Fail to Learn Periodic Functions and How to Fix It

Cited by: 0
Authors
Liu Ziyin [1 ]
Hartwig, Tilman [1 ,2 ,3 ]
Ueda, Masahito [1 ,2 ,4 ]
Affiliations
[1] Univ Tokyo, Sch Sci, Dept Phys, Tokyo, Japan
[2] Univ Tokyo, Inst Phys Intelligence, Sch Sci, Tokyo, Japan
[3] Univ Tokyo, Kavli IPMU WPI, UTIAS, Tokyo, Japan
[4] RIKEN, CEMS, Tokyo, Japan
Funding
Japan Society for the Promotion of Science
Keywords
ARIMA
DOI
None available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Previous literature offers limited clues on how to learn a periodic function with modern neural networks. We begin with a study of the extrapolation properties of neural networks: we prove and demonstrate experimentally that standard activation functions such as ReLU, tanh, and sigmoid, along with their variants, all fail to learn to extrapolate simple periodic functions. We hypothesize that this is due to their lack of a "periodic" inductive bias. To fix this problem, we propose a new activation, namely x + sin^2(x), which achieves the desired periodic inductive bias for learning a periodic function while maintaining the favorable optimization properties of ReLU-based activations. Experimentally, we apply the proposed method to temperature and financial data prediction.
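The proposed activation can be sketched in a few lines of NumPy (a minimal illustration of the formula in the abstract; the function name `snake` and the NumPy framing are our own, not part of the record):

```python
import numpy as np

def snake(x):
    """Proposed activation from the abstract: x + sin^2(x).

    The identity term preserves the easy-to-optimize, ReLU-like
    monotone trend, while the sin^2 term supplies the periodic
    inductive bias the standard activations lack.
    """
    return x + np.sin(x) ** 2

# The output oscillates around y = x with period pi in the deviation,
# since sin^2(x + pi) = sin^2(x).
xs = np.linspace(-np.pi, np.pi, 5)
ys = snake(xs)
```

Note the deviation from the identity, snake(x) - x = sin^2(x), is bounded and periodic, which is what lets a network built from this activation extrapolate a periodic trend instead of flattening out or diverging.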
Pages: 12