Group-linking method: A unified benchmark for machine learning with recurrent neural network

Cited by: 0
Authors
Lin, Tsungnan [1 ,2 ]
Giles, C. Lee [3 ]
Affiliations
[1] Natl Taiwan Univ, Dept Elect Engn, Taipei 10764, Taiwan
[2] Natl Taiwan Univ, Grad Inst Commun Engn, Taipei 10764, Taiwan
[3] Penn State Univ, eBusiness Res Ctr, University Pk, PA 16802 USA
Keywords
recurrent neural networks; finite state machines; grammatical inference; NARX neural networks
DOI
10.1093/ietfec/e90-a.12.2916
CLC Classification Number
TP3 [Computing Technology; Computer Technology]
Discipline Code
0812
Abstract
This paper proposes a method (the Group-Linking Method) that controls the complexity of a sequential function in order to construct Finite Memory Machines (FMMs) of minimal order, i.e., machines with the largest number of states for their number of memory taps. Finding a machine with the maximum number of states is nontrivial because the total number of machines with memory order k is (256)^(2^(k-2)), an extremely large number. Analysis of the Group-Linking Method shows that the data necessary to reconstruct an FMM is the set of strings no longer than the depth of the machine plus one, which is significantly less than what traditional greedy machine learning algorithms require. The Group-Linking Method therefore provides a systematic way of generating unified benchmarks for evaluating the capability of machine learning techniques, for example testing the learning capability of recurrent neural networks. The problem of encoding finite state machines in recurrent neural networks has been extensively explored; however, the great representational power of those networks does not guarantee that a solution is reachable by learning. Previous learning benchmarks are shown to be structurally not rich enough in terms of solutions in weight space. This set of benchmarks with great expressive power can serve as a convenient framework in which to study the learning and computational capabilities of various network models. A fundamental understanding of the capabilities of these networks will allow users to select the most appropriate model for a given application.
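The machine count in the abstract can be checked numerically. The sketch below assumes the count refers to binary FMMs whose output is an arbitrary Boolean function of the current input and the k previous inputs, giving 2^(2^(k+1)) possible machines; this interpretation is an assumption made here, not something the abstract states, and the function names are illustrative only.

```python
# Hedged sketch: counting order-k finite memory machines, assuming a binary
# alphabet where the output is an arbitrary Boolean function of the current
# input plus the k previous inputs (an assumption; the paper may define the
# machine class differently).

def fmm_count(k: int) -> int:
    """Number of Boolean functions of k + 1 binary inputs: 2^(2^(k+1))."""
    return 2 ** (2 ** (k + 1))

def fmm_count_abstract_form(k: int) -> int:
    """The same quantity written as in the abstract: (256)^(2^(k-2)), for k >= 2."""
    return 256 ** (2 ** (k - 2))

# The two closed forms agree, since 256 = 2^8 and 8 * 2^(k-2) = 2^(k+1).
for k in range(2, 6):
    assert fmm_count(k) == fmm_count_abstract_form(k)
    print(k, fmm_count(k))
```

Even for small k the count explodes (k = 2 gives 256 machines, k = 3 gives 65,536), which illustrates why exhaustive search for a maximal-state machine is impractical.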
Pages: 2916 - 2929 (14 pages)
Related Papers (50 records)
  • [21] FOURIER NEURAL NETWORK FOR MACHINE LEARNING
    Liu, Shuang
    PROCEEDINGS OF 2013 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS (ICMLC), VOLS 1-4, 2013, : 285 - 290
  • [22] Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks
    Brosch, Tobias
    Neumann, Heiko
    Roelfsema, Pieter R.
    PLOS COMPUTATIONAL BIOLOGY, 2015, 11 (10)
  • [23] Wireless Network Simulation to Create Machine Learning Benchmark Data
    Katzef, Marc
    Cullen, Andrew C.
    Alpcan, Tansu
    Leckie, Christopher
    Kopacz, Justin
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 6378 - 6383
  • [24] Multiobjective learning of complex recurrent neural network
    Drapala, Jaroslaw
    Brzostowski, Krzysztof
    Tomczak, Jakub
    Systems Science, 2009, 35 (04): : 27 - 37
  • [25] A Recursive Recurrent Neural Network for Statistical Machine Translation
    Liu, Shujie
    Yang, Nan
    Li, Mu
    Zhou, Ming
    PROCEEDINGS OF THE 52ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, 2014, : 1491 - 1500
  • [26] Parallel Implementations of Recurrent Neural Network Learning
    Lotric, Uros
    Dobnikar, Andrej
    ADAPTIVE AND NATURAL COMPUTING ALGORITHMS, 2009, 5495 : 99 - 108
  • [27] Learning Nonadjacent Dependencies with a Recurrent Neural Network
    Farkas, Igor
    ADVANCES IN NEURO-INFORMATION PROCESSING, PT II, 2009, 5507 : 292 - 299
  • [28] A recurrent fuzzy neural network: Learning and application
    Ballini, R
    Gomide, F
    VII BRAZILIAN SYMPOSIUM ON NEURAL NETWORKS, PROCEEDINGS, 2002, : 153 - 153
  • [29] Recurrent neural network learning for text routing
    Wermter, S
    Arevian, G
    Panchev, C
    NINTH INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS (ICANN99), VOLS 1 AND 2, 1999, (470): : 898 - 903
  • [30] Abstractive morphological learning with a recurrent neural network
    Malouf, R.
    Morphology, 2017, 27 (4) : 431 - 458