Backpropagation through Time Algorithm for Training Recurrent Neural Networks using Variable Length Instances

Cited: 0
Authors
Grau, Isel [1 ]
Napoles, Gonzalo [1 ]
Bonet, Isis [2 ]
Garcia, Maria Matilde [1 ]
Affiliations
[1] Univ Cent Marta Abreu Las Villas, Ctr Estudios Informat, Las Villas, Cuba
[2] Escuela Ingn Antioquia, Antioquia, Colombia
Source
COMPUTACION Y SISTEMAS | 2013, Vol. 17, No. 01
Keywords
Recurrent neural networks; backpropagation through time; sequence analysis; bioinformatics; artificial earthquakes;
DOI
Not available
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Artificial Neural Networks (ANNs) are grouped within the connectionist techniques of Artificial Intelligence. In particular, Recurrent Neural Networks are a type of ANN widely used in signal reproduction tasks and sequence analysis, where causal relationships in time and space take place. On the other hand, in many problems of science and engineering, the signals or sequences under analysis do not always have the same length, making it difficult to select a computational technique for processing them. This article presents a flexible implementation of Recurrent Neural Networks that allows designing the topology required by a specific application problem. Furthermore, the proposed model is capable of learning efficiently from knowledge bases containing instances of variable length. The performance of the suggested implementation is evaluated through a case study of bioinformatics sequence classification. We also mention its application to obtaining artificial earthquakes for seismic scenarios similar to Cuba's.
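The abstract does not include code, but the general idea of training on variable-length instances with backpropagation through time (BPTT) can be illustrated by unrolling the network per instance, exactly as far as that instance's own length, so no padding is needed. The following is a minimal NumPy sketch under that assumption (not the authors' implementation); the topology, sizes, and learning rate are illustrative.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper).
rng = np.random.default_rng(0)
H, I = 8, 3  # hidden units, input features per time step

Wx = rng.normal(0, 0.1, (H, I))   # input-to-hidden weights
Wh = rng.normal(0, 0.1, (H, H))   # recurrent hidden-to-hidden weights
Wy = rng.normal(0, 0.1, (1, H))   # hidden-to-output weights

def forward(xs):
    """Run the RNN over a sequence of arbitrary length T; cache states."""
    h = np.zeros(H)
    hs = [h]
    for x in xs:                          # T may differ per instance
        h = np.tanh(Wx @ x + Wh @ h)
        hs.append(h)
    y = (Wy @ hs[-1]).item()              # prediction from final state
    return y, hs

def bptt(xs, target):
    """Squared-error loss at the final step; gradients via full BPTT."""
    y, hs = forward(xs)
    dWx, dWh, dWy = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wy)
    dy = y - target                       # dL/dy for L = 0.5*(y - t)^2
    dWy += dy * hs[-1][None, :]
    dh = dy * Wy[0]                       # gradient entering the last state
    for t in reversed(range(len(xs))):    # unroll exactly T steps backward
        dz = dh * (1.0 - hs[t + 1] ** 2)  # tanh'(z) = 1 - tanh(z)^2
        dWx += np.outer(dz, xs[t])
        dWh += np.outer(dz, hs[t])
        dh = Wh.T @ dz                    # propagate to previous time step
    return 0.5 * (y - target) ** 2, dWx, dWh, dWy

# Instances of different lengths (4, 7, 2, 5); each update unrolls only
# over that instance's own length.
data = [(rng.normal(size=(T, I)), 1.0) for T in (4, 7, 2, 5)]
lr = 0.1
losses = []
for epoch in range(200):
    total = 0.0
    for xs, t in data:
        loss, gWx, gWh, gWy = bptt(xs, t)
        Wx -= lr * gWx
        Wh -= lr * gWh
        Wy -= lr * gWy
        total += loss
    losses.append(total)
```

Because each sequence is processed individually, the backward pass naturally matches the forward unrolling length, which is one straightforward way to handle variable-length knowledge bases without truncation or padding.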
Pages: 15-24
Page count: 10
Related Papers
50 total
  • [41] Backpropagation Through States: Training Neural Networks with Sequentially Semiseparable Weight Matrices
    Kissel, Matthias
    Gottwald, Martin
    Gjeroska, Biljana
    Paukner, Philipp
    Diepold, Klaus
    PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2022, 2022, 13566 : 476 - 487
  • [42] Backpropagation through nonlinear units for the all-optical training of neural networks
    Guo, Xianxin
    Barrett, Thomas D.
    Wang, Zhiming M.
    Lvovsky, A. I.
    Photonics Research, 2021, (03) : 262 - 271
  • [43] Active training of backpropagation neural networks using the learning by experimentation methodology
    Fu-Ren Lin
    Michael J. Shaw
    Annals of Operations Research, 1997, 75 : 105 - 122
  • [44] Training optronic convolutional neural networks on an optical system through backpropagation algorithms
    Gu, Ziyu
    Huang, Zicheng
    Gao, Yesheng
    Liu, Xingzhao
    OPTICS EXPRESS, 2022, 30 (11) : 19416 - 19440
  • [45] Active training of backpropagation neural networks using the learning by experimentation methodology
    Lin, FR
    Shaw, MJ
    ANNALS OF OPERATIONS RESEARCH, 1997, 75 (0) : 105 - 122
  • [46] Backpropagation algorithm for logic oriented neural networks
    Kamio, T
    Tanaka, S
    Morisue, M
    IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL II, 2000, : 123 - 128
  • [47] Neural networks with robust backpropagation learning algorithm
    Walczak, B
    ANALYTICA CHIMICA ACTA, 1996, 322 (1-2) : 21 - 29
  • [48] Reliable classification using neural networks: a genetic algorithm and backpropagation comparison
    Sexton, RS
    Dorsey, RE
    DECISION SUPPORT SYSTEMS, 2000, 30 (01) : 11 - 22
  • [49] Memory-Efficient Backpropagation for Recurrent Neural Networks
    Ayoub, Issa
    Al Osman, Hussein
    ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, 11489 : 274 - 283
  • [50] Stable dynamic backpropagation learning in recurrent neural networks
    Jin, LA
    Gupta, MM
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1999, 10 (06): : 1321 - 1334