Interpreting Recurrent Neural Networks Behaviour via Excitable Network Attractors

Cited by: 31
Authors
Ceni, Andrea [1]
Ashwin, Peter [2]
Livi, Lorenzo [1,3,4]
Affiliations
[1] Univ Exeter, Dept Comp Sci, Exeter EX4 4QF, Devon, England
[2] Univ Exeter, Dept Math, Exeter EX4 4QF, Devon, England
[3] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[4] Univ Manitoba, Dept Math, Winnipeg, MB R3T 2N2, Canada
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Recurrent neural networks; Dynamical systems; Network attractors; Bifurcations; ECHO STATE PROPERTY; APPROXIMATION; ITINERANCY; DYNAMICS; SYSTEMS;
DOI
10.1007/s12559-019-09634-2
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Machine learning provides fundamental tools both for scientific research and for the development of technologies with significant impact on society. It provides methods that facilitate the discovery of regularities in data and that make predictions without explicit knowledge of the rules governing a system. However, a price is paid for exploiting such flexibility: machine learning methods are typically black boxes in which it is difficult to fully understand what the machine is doing or how it is operating. This constrains the applicability and explainability of such methods. Our research aims to open the black box of recurrent neural networks, an important family of neural networks used for processing sequential data. We propose a novel methodology that provides a mechanistic interpretation of their behaviour when solving a computational task. Our methodology uses mathematical constructs called excitable network attractors, which are invariant sets in phase space composed of stable attractors and excitable connections between them. As the behaviour of recurrent neural networks depends both on training and on inputs to the system, we introduce an algorithm to extract network attractors directly from the trajectory of a neural network while it solves tasks. Simulations conducted on a controlled benchmark task confirm the relevance of these attractors for interpreting the behaviour of recurrent neural networks, at least for tasks that involve learning a finite number of stable states and transitions between them.
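The extraction idea described in the abstract, recovering a finite set of stable states plus input-driven transitions from a network's trajectory, can be illustrated on a toy system. The following Python sketch is not the paper's algorithm: it uses a one-unit recurrent map whose self-weight a > 1 makes it bistable (a minimal flip-flop), then collects near-stationary trajectory samples and clusters them to recover the two attractor states. All parameter values and names here are illustrative choices.

```python
import numpy as np

# Illustrative parameters (not from the paper): a > 1 makes the map bistable.
a = 2.0   # recurrent self-weight
b = 3.0   # input weight, strong enough to trigger a state switch

def step(x, u):
    """One step of a one-unit recurrent network: x -> tanh(a*x + b*u)."""
    return np.tanh(a * x + b * u)

# Flip-flop input: a +1 pulse, later a -1 pulse, zero elsewhere.
T = 200
u = np.zeros(T)
u[50] = 1.0
u[150] = -1.0

# Run the network and record its trajectory.
x = -0.1
xs = np.empty(T)
for t in range(T):
    x = step(x, u[t])
    xs[t] = x

# Candidate attractor states: samples where the trajectory has settled.
drift = np.abs(np.diff(xs))
settled = xs[1:][drift < 1e-6]

# Cluster the settled samples by rounding: two symmetric stable states emerge.
attractors = sorted({round(float(v), 2) for v in settled})
print(attractors)  # the two symmetric fixed points, approx. [-0.96, 0.96]
```

Between pulses the state parks on one of the two stable fixed points of x -> tanh(2x), and each pulse excites a transition to the other; in high-dimensional trained RNNs the same trajectory-based extraction applies in spirit, with a proper clustering step replacing the rounding.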
Pages: 330-356 (27 pages)