Learning to Adaptively Scale Recurrent Neural Networks

Cited by: 0
Authors
Hu, Hao [1 ]
Wang, Liqiang [1 ]
Qi, Guo-Jun [2 ]
Affiliations
[1] Univ Cent Florida, Orlando, FL 32816 USA
[2] Huawei Cloud, Shenzhen, Peoples R China
Funding
U.S. National Science Foundation
Keywords
LONG-TERM DEPENDENCIES;
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in recurrent neural network (RNN) research have demonstrated the superiority of multiscale structures for learning temporal representations of time series. Currently, most multiscale RNNs use fixed scales, which do not match the dynamic nature of temporal patterns in sequences. In this paper, we propose Adaptively Scaled Recurrent Neural Networks (ASRNNs), a simple but efficient way to handle this problem. Instead of using predefined scales, ASRNNs learn and adjust scales based on different temporal contexts, making them more flexible in modeling multiscale patterns. Compared with other multiscale RNNs, ASRNNs gain dynamic scaling capabilities with much simpler structures and are easy to integrate with various RNN cells. Experiments on multiple sequence modeling tasks indicate that ASRNNs can efficiently adapt scales to different sequence contexts and yield better performance than baselines without dynamic scaling abilities.
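
The abstract describes the scaling mechanism only at a high level. As a rough illustration of the general idea (a context-dependent update rate standing in for a learned time scale), a minimal PyTorch sketch might look like the following; this is not the authors' exact ASRNN formulation, and AdaptiveScaleRNNCell, scale_gate, and all sizes are hypothetical names chosen for illustration:

    # Minimal sketch, NOT the paper's exact ASRNN: a vanilla RNN cell with a
    # context-dependent "scale" gate that modulates the update rate per unit.
    import torch
    import torch.nn as nn

    class AdaptiveScaleRNNCell(nn.Module):
        def __init__(self, input_size, hidden_size):
            super().__init__()
            self.candidate = nn.Linear(input_size + hidden_size, hidden_size)
            # Predicts a per-unit scale in (0, 1) from the current context.
            self.scale_gate = nn.Linear(input_size + hidden_size, hidden_size)

        def forward(self, x, h):
            ctx = torch.cat([x, h], dim=-1)
            h_new = torch.tanh(self.candidate(ctx))
            s = torch.sigmoid(self.scale_gate(ctx))  # learned, input-dependent
            # Small s -> slow updates (coarse scale); large s -> fast updates.
            return (1.0 - s) * h + s * h_new

    # Usage: unroll over a sequence of shape (T, B, input_size).
    cell = AdaptiveScaleRNNCell(16, 32)
    xs = torch.randn(10, 4, 16)
    h = torch.zeros(4, 32)
    for x_t in xs:
        h = cell(x_t, h)

Because s is produced from the current input at every step, each unit can in effect skip updates (s near 0, a coarse time scale) or update every step (s near 1), which is one simple way to realize context-adaptive scales.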
Pages: 3822-3829
Page count: 8
Related Papers (50 total)
  • [41] Learning to Learn and Compositionality with Deep Recurrent Neural Networks
    de Freitas, Nando
    KDD'16: PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2016, : 3 - 3
  • [42] Effect of complexity on learning ability of recurrent neural networks
    Honma, N.
    Kitagawa, K.
    Abe, K.
    ARTIFICIAL LIFE AND ROBOTICS, 1998, 2 (3) : 97 - 101
  • [43] Learning Statistical Scripts with LSTM Recurrent Neural Networks
    Pichotta, Karl
    Mooney, Raymond J.
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 2800 - 2806
  • [44] A conjugate gradient learning algorithm for recurrent neural networks
    Chang, WF
    Mak, MW
    NEUROCOMPUTING, 1999, 24 (1-3) : 173 - 189
  • [45] Visual concept conjunction learning with recurrent neural networks
    Liang, Kongming
    Chang, Hong
    Shan, Shiguang
    Chen, Xilin
    NEUROCOMPUTING, 2020, 395 : 229 - 236
  • [46] GENERALIZED SCHEME FOR OPTIMAL LEARNING IN RECURRENT NEURAL NETWORKS
    SHANMUKH, K
    VENKATESH, YV
    IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING, 1995, 142 (02): : 71 - 77
  • [47] Stable dynamic backpropagation learning in recurrent neural networks
    Jin, LA
    Gupta, MM
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1999, 10 (06): : 1321 - 1334
  • [48] Learning Visual Storylines with Skipping Recurrent Neural Networks
    Sigurdsson, Gunnar A.
    Chen, Xinlei
    Gupta, Abhinav
    COMPUTER VISION - ECCV 2016, PT V, 2016, 9909 : 71 - 88
  • [49] Continual learning for recurrent neural networks: An empirical evaluation
    Cossu, Andrea
    Carta, Antonio
    Lomonaco, Vincenzo
    Bacciu, Davide
    NEURAL NETWORKS, 2021, 143 : 607 - 627
  • [50] Learning Password Modification Patterns with Recurrent Neural Networks
    Nosenko, Alex
    Cheng, Yuan
    Chen, Haiquan
    SECURE KNOWLEDGE MANAGEMENT IN THE ARTIFICIAL INTELLIGENCE ERA, 2022, 1549 : 110 - 129