Learning to Adaptively Scale Recurrent Neural Networks

Cited: 0
Authors
Hu, Hao [1 ]
Wang, Liqiang [1 ]
Qi, Guo-Jun [2 ]
Affiliations
[1] Univ Cent Florida, Orlando, FL 32816 USA
[2] Huawei Cloud, Shenzhen, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
LONG-TERM DEPENDENCIES;
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advancements in recurrent neural network (RNN) research have demonstrated the superiority of utilizing multiscale structures in learning temporal representations of time series. Currently, most multiscale RNNs use fixed scales, which do not reflect the dynamic nature of temporal patterns across sequences. In this paper, we propose Adaptively Scaled Recurrent Neural Networks (ASRNNs), a simple but efficient way to handle this problem. Instead of using predefined scales, ASRNNs are able to learn and adjust scales based on different temporal contexts, making them more flexible in modeling multiscale patterns. Compared with other multiscale RNNs, ASRNNs achieve dynamic scaling with much simpler structures and are easy to integrate with various RNN cells. Experiments on multiple sequence modeling tasks indicate that ASRNNs can efficiently adapt scales to different sequence contexts and yield better performance than baselines without dynamic scaling abilities.
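The abstract describes learning input-dependent time scales rather than fixing them in advance. As a minimal illustrative sketch of that general idea (not the authors' ASRNN architecture, whose details are not given here), the toy cell below learns a sigmoid "scale gate" `s_t` from the current input and hidden state; values of `s_t` near 1 slow the hidden-state update (long time scale), values near 0 speed it up. All parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AdaptiveScaleRNNCell:
    """Toy RNN cell with a learned, input-dependent scale gate.

    Illustrative only: the gate s_t in (0, 1) interpolates between
    keeping the previous hidden state (slow dynamics) and taking a
    fresh candidate state (fast dynamics), mimicking an adaptive
    time scale. This is NOT the ASRNN architecture from the paper.
    """

    def __init__(self, input_size, hidden_size):
        k = 1.0 / np.sqrt(hidden_size)
        # candidate-update parameters (vanilla RNN)
        self.W = rng.uniform(-k, k, (hidden_size, input_size))
        self.U = rng.uniform(-k, k, (hidden_size, hidden_size))
        self.b = np.zeros(hidden_size)
        # scale-gate parameters (hypothetical)
        self.Ws = rng.uniform(-k, k, (hidden_size, input_size))
        self.Us = rng.uniform(-k, k, (hidden_size, hidden_size))
        self.bs = np.zeros(hidden_size)

    def step(self, x, h):
        # candidate hidden state
        h_tilde = np.tanh(self.W @ x + self.U @ h + self.b)
        # input-dependent scale: s close to 1 -> state changes slowly
        s = sigmoid(self.Ws @ x + self.Us @ h + self.bs)
        return s * h + (1.0 - s) * h_tilde

cell = AdaptiveScaleRNNCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for t in range(10):
    h = cell.step(rng.normal(size=4), h)
print(h.shape)  # (8,)
```

Because each step is a convex combination of the previous state and a `tanh` candidate, the hidden state stays bounded in (-1, 1) while the effective update rate varies per unit and per time step.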
Pages: 3822 - 3829
Page count: 8
Related Papers
50 items total
  • [31] Equality index and learning in recurrent fuzzy neural networks
    Ballini, R
    Gomide, F
    PROCEEDINGS OF THE 12TH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1 AND 2, 2003, : 155 - 160
  • [32] Residual Recurrent Neural Networks for Learning Sequential Representations
    Yue, Boxuan
    Fu, Junwei
    Liang, Jun
    INFORMATION, 2018, 9 (03)
  • [33] Semantic learning in autonomously active recurrent neural networks
    Gros, Claudius
    Kaczor, Gregor
    LOGIC JOURNAL OF THE IGPL, 2010, 18 (05) : 686 - 704
  • [34] "FORCE" learning in recurrent neural networks as data assimilation
    Duane, Gregory S.
    CHAOS, 2017, 27 (12)
  • [35] Sequence Metric Learning as Synchronization of Recurrent Neural Networks
    Compagnon, Paul
    Lefebvre, Gregoire
    Duffner, Stefan
    Garcia, Christophe
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [36] Efficient Online Learning with Spiral Recurrent Neural Networks
    Sollacher, Rudolf
    Gao, Huaien
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008, : 2551 - 2558
  • [37] Learning of Process Representations Using Recurrent Neural Networks
    Seeliger, Alexander
    Luettgen, Stefan
    Nolle, Timo
    Muehlhaeuser, Max
    ADVANCED INFORMATION SYSTEMS ENGINEERING (CAISE 2021), 2021, 12751 : 109 - 124
  • [38] Constrained Training of Recurrent Neural Networks for Automata Learning
    Aichernig, Bernhard K.
    Koenig, Sandra
    Mateis, Cristinel
    Pferscher, Andrea
    Schmidt, Dominik
    Tappler, Martin
    SOFTWARE ENGINEERING AND FORMAL METHODS, SEFM 2022, 2022, 13550 : 155 - 172
  • [39] Learning Topology and Dynamics of Large Recurrent Neural Networks
    She, Yiyuan
    He, Yuejia
    Wu, Dapeng
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2014, 62 (22) : 5881 - 5891
  • [40] Reinforcement Learning via Recurrent Convolutional Neural Networks
    Shankar, Tanmay
    Dwivedy, Santosha K.
    Guha, Prithwijit
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 2592 - 2597