Expressive power of first-order recurrent neural networks determined by their attractor dynamics

Cited by: 12
Authors
Cabessa, Jeremie [1 ]
Villa, Alessandro E. P. [2 ]
Affiliations
[1] Univ Paris 02, Lab Math Econ LEMMA, 4 Rue Blaise Desgoffe, F-75006 Paris, France
[2] Univ Lausanne, Fac Business & Econ HEC, Lab Neuroheurist NHRG, CH-1015 Lausanne, Switzerland
Funding
Swiss National Science Foundation
Keywords
Recurrent neural networks; Neural computation; Analog computation; Evolving systems; Learning; Attractors; Spatiotemporal patterns; Turing machines; Expressive power; omega-languages; SPATIOTEMPORAL FIRING PATTERNS; HIERARCHICAL-CLASSIFICATION; COMPUTATIONAL POWER; CHAOTIC ATTRACTORS; CORTEX;
DOI
10.1016/j.jcss.2016.04.006
Chinese Library Classification
TP3 [computing technology; computer technology]
Discipline Code
0812
Abstract
We provide a characterization of the expressive powers of several models of deterministic and nondeterministic first-order recurrent neural networks according to their attractor dynamics. The expressive power of neural nets is expressed as the topological complexity of their underlying neural ω-languages, and refers to the ability of the networks to perform more or less complicated classification tasks via the manifestation of specific attractor dynamics. In this context, we prove that most neural models under consideration are strictly more powerful than Muller Turing machines. These results provide new insights into the computational capabilities of recurrent neural networks. (C) 2016 Elsevier Inc. All rights reserved.
Pages: 1232 / 1250
Page count: 19
Related papers
50 in total
  • [21] On the expressive power of first-order modal logic with two-dimensional operators
    Alexander W. Kocurek
    Synthese, 2018, 195 : 4373 - 4417
  • [22] Identification of Chaotic Dynamics in Jerky-Based Systems by Recurrent Wavelet First-Order Neural Networks with a Morlet Wavelet Activation Function
    Magallon-Garcia, Daniel Alejandro
    Ontanon-Garcia, Luis Javier
    Garcia-Lopez, Juan Hugo
    Huerta-Cuellar, Guillermo
    Soubervielle-Montalvo, Carlos
    AXIOMS, 2023, 12 (02)
  • [23] Local community detection as pattern restoration by attractor dynamics of recurrent neural networks
    Okamoto, Hiroshi
    BIOSYSTEMS, 2016, 146 : 85 - 90
  • [24] Attractor dynamics in feedforward neural networks
    Saul, LK
    Jordan, MI
    NEURAL COMPUTATION, 2000, 12 (06) : 1313 - 1335
  • [25] First-order Adversarial Vulnerability of Neural Networks and Input Dimension
    Simon-Gabriel, Carl-Johann
    Ollivier, Yann
    Schoelkopf, Bernhard
    Bottou, Leon
    Lopez-Paz, David
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [26] Perceptual simulations can be as expressive as first-order logic
    Hiroyuki Uchida
    Nicholas L. Cassimatis
    J. R. Scally
    Cognitive Processing, 2012, 13 : 361 - 369
  • [27] Perceptual simulations can be as expressive as first-order logic
    Uchida, Hiroyuki
    Cassimatis, Nicholas L.
    Scally, J. R.
    COGNITIVE PROCESSING, 2012, 13 (04) : 361 - 369
  • [28] Expressive probabilistic sampling in recurrent neural networks
    Chen, Shirui
    Jiang, Linxing Preston
    Rao, Rajesh P. N.
    Shea-Brown, Eric
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [29] On the Expressive Power of Deep Neural Networks
    Raghu, Maithra
    Poole, Ben
    Kleinberg, Jon
    Ganguli, Surya
    Sohl-Dickstein, Jascha
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [30] Magnetic order and categorization in attractor neural networks
    Theumann, WK
    Erichsen, R
    JOURNAL OF MAGNETISM AND MAGNETIC MATERIALS, 2001, 226 (PART I) : 560 - 561