Neural Networks Fail to Learn Periodic Functions and How to Fix It

Cited by: 0
Authors
Liu Ziyin [1 ]
Hartwig, Tilman [1 ,2 ,3 ]
Ueda, Masahito [1 ,2 ,4 ]
Affiliations
[1] Univ Tokyo, Sch Sci, Dept Phys, Tokyo, Japan
[2] Univ Tokyo, Inst Phys Intelligence, Sch Sci, Tokyo, Japan
[3] Univ Tokyo, Kavli IPMU WPI, UTIAS, Tokyo, Japan
[4] RIKEN, CEMS, Tokyo, Japan
Funding
Japan Society for the Promotion of Science (JSPS);
Keywords
ARIMA;
DOI
None
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Previous literature offers limited clues on how to learn a periodic function using modern neural networks. We start with a study of the extrapolation properties of neural networks; we prove and demonstrate experimentally that the standard activation functions, such as ReLU, tanh, and sigmoid, along with their variants, all fail to learn to extrapolate simple periodic functions. We hypothesize that this is due to their lack of a "periodic" inductive bias. To fix this problem, we propose a new activation, namely x + sin²(x), which achieves the desired periodic inductive bias to learn a periodic function while maintaining the favorable optimization properties of ReLU-based activations. Experimentally, we apply the proposed method to temperature and financial data prediction.
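The proposed activation from the abstract can be sketched in a few lines of NumPy. The function name `snake` and the vectorized form below are illustrative assumptions, not part of this record; the record itself only gives the formula x + sin²(x).

```python
import numpy as np

def snake(x):
    """Proposed periodic activation: snake(x) = x + sin^2(x).

    Its derivative is 1 + sin(2x) >= 0, so the function is
    monotone like ReLU while still carrying a periodic component,
    which is the inductive bias the abstract describes.
    """
    return x + np.sin(x) ** 2

# The activation is identity-like on average (sin^2 oscillates in [0, 1])
# but injects a fixed-period ripple that plain ReLU/tanh networks lack.
print(snake(0.0))        # exactly 0 at the origin
print(snake(np.pi))      # sin(pi)^2 vanishes, so this returns pi
```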
Pages: 12