Interpreting Recurrent Neural Networks Behaviour via Excitable Network Attractors

Cited by: 31
Authors
Ceni, Andrea [1 ]
Ashwin, Peter [2 ]
Livi, Lorenzo [1 ,3 ,4 ]
Affiliations
[1] Univ Exeter, Dept Comp Sci, Exeter EX4 4QF, Devon, England
[2] Univ Exeter, Dept Math, Exeter EX4 4QF, Devon, England
[3] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[4] Univ Manitoba, Dept Math, Winnipeg, MB R3T 2N2, Canada
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Recurrent neural networks; Dynamical systems; Network attractors; Bifurcations; ECHO STATE PROPERTY; APPROXIMATION; ITINERANCY; DYNAMICS; SYSTEMS;
DOI
10.1007/s12559-019-09634-2
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning provides fundamental tools both for scientific research and for the development of technologies with significant impact on society. It provides methods that facilitate the discovery of regularities in data and that give predictions without explicit knowledge of the rules governing a system. However, a price is paid for exploiting such flexibility: machine learning methods are typically black boxes where it is difficult to fully understand what the machine is doing or how it is operating. This poses constraints on the applicability and explainability of such methods. Our research aims to open the black box of recurrent neural networks, an important family of neural networks used for processing sequential data. We propose a novel methodology that provides a mechanistic interpretation of their behaviour when solving a computational task. Our methodology uses mathematical constructs called excitable network attractors, which are invariant sets in phase space composed of stable attractors and excitable connections between them. As the behaviour of recurrent neural networks depends both on training and on inputs to the system, we introduce an algorithm to extract network attractors directly from the trajectory of a neural network while solving tasks. Simulations conducted on a controlled benchmark task confirm the relevance of these attractors for interpreting the behaviour of recurrent neural networks, at least for tasks that involve learning a finite number of stable states and transitions between them.
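The extraction algorithm is specified in the paper itself; as a rough illustration of the underlying idea only (not the authors' implementation), the sketch below drives a small echo state network with flip-flop-style input pulses, records its state trajectory, and collects near-stationary trajectory points as candidate stable states. The reservoir construction, the 0.01 speed threshold, and the 1.0 merge radius are all illustrative assumptions; an untrained reservoir like this one typically settles on a single attractor, whereas a trained network solving the benchmark task would exhibit several stable states plus excitable connections between them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy echo state network: random reservoir rescaled to spectral radius 0.9,
# a common heuristic for the echo state property mentioned in the keywords.
n, steps = 100, 2000
W = rng.normal(0.0, 1.0, (n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0.0, 1.0, (n, 1))

# Sparse +1/-1 input pulses, loosely mimicking a flip-flop switching task.
u = np.zeros((steps, 1))
pulse_times = rng.choice(steps, size=20, replace=False)
u[pulse_times, 0] = rng.choice([-1.0, 1.0], size=20)

# Drive the network and record the full state trajectory.
x = np.zeros(n)
traj = np.empty((steps, n))
for t in range(steps):
    x = np.tanh(W @ x + W_in @ u[t])
    traj[t] = x

# Candidate stable states: points where the state barely moves between steps.
speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
slow_points = traj[1:][speed < 0.01]

# Greedily merge nearby slow points into distinct attractor candidates.
centers = []
for p in slow_points:
    if all(np.linalg.norm(p - c) > 1.0 for c in centers):
        centers.append(p)

print(f"{len(centers)} candidate stable state(s) extracted from the trajectory")
```

In the paper's setting one would instead run this kind of analysis on a trained network, then recover the excitable connections by probing which small perturbations of each stable state trigger a transition to another.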
Pages: 330 - 356
Page count: 27
Related Papers (items 21-30 of 50)
  • [21] Continuous Attractors of Lotka-Volterra Recurrent Neural Networks with Infinite Neurons
    Yu, Jiali
    Yi, Zhang
    Zhou, Jiliu
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2010, 21 (10): 1690 - 1695
  • [22] Continuous Recurrent Neural Networks Based on Function Satlins: Coexistence of Multiple Continuous Attractors
    Huang, Yue
    Yu, Jiali
    Leng, Jinsong
    Liu, Bisen
    Yi, Zhang
    NEURAL PROCESSING LETTERS, 2022, 54: 1293 - 1315
  • [23] Interpreting a recurrent neural network's predictions of ICU mortality risk
    Ho, Long V.
    Aczon, Melissa
    Ledbetter, David
    Wetzel, Randall
    JOURNAL OF BIOMEDICAL INFORMATICS, 2021, 114
  • [24] Network of Recurrent Neural Networks: Design for Emergence
    Wang, Chaoming
    Zeng, Yi
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 89 - 102
  • [25] Network restoration using recurrent neural networks
    Sri Venkateswara Univ, Bangalore, India
    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, 5: 264 - 273
  • [26] Inferring the Dynamics of Gene Regulatory Networks via Optimized Recurrent Neural Network and Dynamic Bayesian Network
    Akutekwe, Arinze
    Seker, Huseyin
    2015 IEEE CONFERENCE ON COMPUTATIONAL INTELLIGENCE IN BIOINFORMATICS AND COMPUTATIONAL BIOLOGY (CIBCB), 2015, : 374 - 381
  • [27] Identifying Brain Networks of Multiple Time Scales via Deep Recurrent Neural Network
    Cui, Yan
    Zhao, Shijie
    Wang, Han
    Xie, Li
    Chen, Yaowu
    Han, Junwei
    Guo, Lei
    Zhou, Fan
    Liu, Tianming
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, PT III, 2018, 11072 : 284 - 292
  • [28] Identifying Brain Networks at Multiple Time Scales via Deep Recurrent Neural Network
    Cui, Yan
    Zhao, Shijie
    Wang, Han
    Xie, Li
    Chen, Yaowu
    Han, Junwei
    Guo, Lei
    Zhou, Fan
    Liu, Tianming
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2019, 23 (06) : 2515 - 2525
  • [29] Probing the Attractors in Neural Networks
    Wong, K. Y. M.
    PHYSICA A, 1993, 200 (1-4): 619 - 627
  • [30] Interpreting Layered Neural Networks via Hierarchical Modular Representation
    Watanabe, Chihiro
    NEURAL INFORMATION PROCESSING, ICONIP 2019, PT V, 2019, 1143 : 376 - 388