A modular architecture for transparent computation in recurrent neural networks

Cited by: 11
Authors
Carmantini, Giovanni S. [1 ]
Graben, Peter Beim [2 ]
Desroches, Mathieu [3 ]
Rodrigues, Serafim [1 ]
Affiliations
[1] Univ Plymouth, Sch Comp & Math, Plymouth, Devon, England
[2] Humboldt Univ, Bernstein Ctr Computat Neurosci Berlin, Berlin, Germany
[3] Inria Sophia Antipolis Mediterranee, Valbonne, France
Keywords
Automata Theory; Recurrent artificial neural networks; Representation theory; Nonlinear dynamical automata; Neural symbolic computation; Versatile shift; CENTRAL PATTERN GENERATORS; DYNAMICAL-SYSTEM; MODEL; POWER; REPRESENTATION; UNPREDICTABILITY; BACKPROPAGATION; UNDECIDABILITY; INFORMATION; LOCOMOTION;
DOI
10.1016/j.neunet.2016.09.001
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Computation is classically studied in terms of automata, formal languages and algorithms; yet the relation between neural dynamics and symbolic representations and operations remains unclear in traditional eliminative connectionism. We therefore propose a new perspective on this central issue, which we refer to as transparent connectionism, by giving explicit accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, and show that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, i.e. dynamical systems evolving on a vector space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, in which data, symbolic operations and their control are not only distinguishable in activation space but also spatially localizable in the network itself, while the encoding of symbolic representations remains distributed. The resulting networks simulate automata in real time and are programmed directly, without any network training. To illustrate the distinctive characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the construction of a network simulating a system of interactive automata that supports the parsing of garden-path sentences, as investigated in psycholinguistic experiments. © 2016 Elsevier Ltd. All rights reserved.
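The abstract's pipeline (symbolic sequences, Gödelization, piecewise affine dynamics on a vector space, recurrent network) can be made concrete with a small sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' construction: the alphabet, the Gödel numbering gamma, and a plain left shift (the paper's versatile shift is more general, also substituting symbols before shifting) are all chosen here for readability. It encodes the two halves of a dotted symbol sequence as the coordinates of a point in the unit square and iterates the shift as a piecewise affine map.

# Minimal sketch (illustrative assumptions throughout): Goedel-encode a
# dotted symbol sequence as a point in the unit square and iterate the
# shift as a piecewise affine map, the kind of dynamics that nonlinear
# dynamical automata evolve under.

def godelize(symbols, gamma, b):
    """Encode a finite symbol string as a number in [0, 1),
    reading the symbols as digits of a base-b fraction."""
    return sum(gamma[s] * b ** -(k + 1) for k, s in enumerate(symbols))

def shift_step(x, y, b):
    """One left shift: pop the symbol under the dot from the right
    half-sequence (encoded as x) and push it onto the reversed left
    half-sequence (encoded as y). On each branch this is affine."""
    g = int(b * x)                  # Goedel number of the symbol under the dot
    return b * x - g, (y + g) / b   # strip it from x, prepend it to y

alphabet = ["a", "b", "c"]                       # hypothetical alphabet
gamma = {s: i for i, s in enumerate(alphabet)}   # symbol -> Goedel number
b = len(alphabet)

x = godelize("bac", gamma, b)   # right half-sequence ". b a c"
y = godelize("", gamma, b)      # empty left half-sequence

for _ in range(3):
    x, y = shift_step(x, y, b)
    print(f"x = {x:.6f}   y = {y:.6f}")

Each branch of shift_step (one per symbol under the dot) is affine, so the whole step is piecewise affine on the unit square; Gödelizing the symbol substitutions of a versatile shift would add further affine branches, and it is a system of this piecewise affine kind that the paper's final step maps onto a recurrent artificial neural network.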
Pages: 85-105
Number of pages: 21
Related papers
50 items in total
  • [41] Towards designing modular recurrent neural networks in learning protein secondary structures
    Babaei, Sepideh
    Geranmayeh, Amir
    Seyyedsalehi, Seyyed Ali
    EXPERT SYSTEMS WITH APPLICATIONS, 2012, 39 (06) : 6263 - 6274
  • [42] Cooperative recurrent modular neural networks for constrained optimization: a survey of models and applications
    Kamel, Mohamed S.
    Xia, Youshen
    Cognitive Neurodynamics, 2009, 3 : 47 - 81
  • [43] A modular architecture for hybrid VLSI neural networks and its application in a smart photosensor
    Djahanshahi, H
    Ahmadi, M
    Jullien, GA
    Miller, WC
    ICNN - 1996 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, VOLS. 1-4, 1996, : 868 - 873
  • [44] An architecture for emergency event prediction using LSTM recurrent neural networks
    Cortez, Bitzel
    Carrera, Berny
    Kim, Young-Jin
    Jung, Jae-Yoon
    EXPERT SYSTEMS WITH APPLICATIONS, 2018, 97 : 315 - 324
  • [45] Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons
    Buesing, Lars
    Bill, Johannes
    Nessler, Bernhard
    Maass, Wolfgang
    PLOS COMPUTATIONAL BIOLOGY, 2011, 7 (11)
  • [46] Implementation of Universal Computation via Small Recurrent Finite Precision Neural Networks
    Hobbs, J. Nicholas
    Siegelmann, Hava
    2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015,
  • [47] Rectified Attention Gate Unit in Recurrent Neural Networks for Effective Attention Computation
    Ha, Manh-Hung
    Chen, Oscal Tzyh-Chiang
    2023 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP, SSP, 2023, : 81 - 84
  • [48] The modular neural predictive coding architecture
    Chetouani, M
    Gas, B
    Zarader, JL
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 452 - 456
  • [49] Bayesian computation in recurrent neural circuits
    Rao, RPN
    NEURAL COMPUTATION, 2004, 16 (01) : 1 - 38
  • [50] NEURAL NETWORKS WITH TRANSPARENT MEMORY
    FONTANARI, JF
    KOBERLE, R
    JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1988, 21 (04): L259 - L262