TAKING LAWS OUT OF TRAINED NEURAL NETWORKS

Cited by: 0
Authors: Majewski, Jaroslaw [1]; Wojtyna, Ryszard [1]
Affiliation: [1] Univ Technol & Life Sci, Fac Telecommun & Elect Engn, PL-85796 Bydgoszcz, Poland
DOI: Not available
Chinese Library Classification: TM (Electrical Engineering); TN (Electronics and Communication Technology)
Subject Classification Codes: 0808; 0809
Abstract:
In this paper, the problem of discovering numeric laws governing a trained neural network is considered. We propose new multilayer perceptrons implementing fractional rational functions, i.e. functions expressed as a ratio of two polynomials of any order with a given number of components in the numerator and denominator. Our networks can be utilized not only for function implementation; they can also be used to extract knowledge embedded in the trained network, and this extraction is performed during the training process. The extracted laws underlying the network operation are expressed in a symbolic, fractional-rational-function form, and the networks provide information about the function parameters. The extraction ability results from applying proper activation functions in the different perceptron layers, i.e. functions of the exp(.), ln(.), (.)^(-1) and/or (.)^2 types. Both theoretical considerations and simulation results are presented to illustrate the properties of our networks.
Pages: 21-24 (4 pages)
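
As a rough illustration of the architecture the abstract describes, below is a minimal sketch of a perceptron-style network that realizes a fractional rational function f(x) = P(x)/Q(x) with ln(.), exp(.) and (.)^(-1) activations, so that the learned law can be read directly off the weights. This is not the authors' implementation: the layer sizes, parameter names, restriction to positive inputs and use of NumPy are assumptions made for the example, and the training step is omitted.

```python
# Minimal sketch (an assumption, not the authors' code) of a perceptron-style
# network realizing a fractional rational function f(x) = P(x) / Q(x) with
# ln(.), exp(.) and (.)^(-1) activations. Layer sizes, parameter names and the
# use of NumPy are illustrative choices; the training loop itself is omitted.

import numpy as np

rng = np.random.default_rng(0)

N_NUM, N_DEN = 3, 2  # number of monomial terms in the numerator / denominator

# Trainable parameters: monomial exponents (weights feeding the exp(.) layer)
# and polynomial coefficients (weights of the output linear combinations).
exp_num = rng.normal(size=N_NUM)   # exponents p_i of numerator terms x**p_i
exp_den = rng.normal(size=N_DEN)   # exponents q_j of denominator terms x**q_j
coef_num = rng.normal(size=N_NUM)  # coefficients a_i
coef_den = rng.normal(size=N_DEN)  # coefficients b_j

def forward(x):
    """Forward pass for positive inputs x (ln requires x > 0); x is a 1-D array."""
    z = np.log(x)                             # ln(.) layer
    num_terms = np.exp(np.outer(z, exp_num))  # exp(.) layer: x**p_i = exp(p_i * ln x)
    den_terms = np.exp(np.outer(z, exp_den))  # x**q_j
    P = num_terms @ coef_num                  # numerator sum of monomials
    Q = den_terms @ coef_den                  # denominator sum (assumed nonzero here)
    return P * Q**(-1)                        # (.)^(-1) layer forms the ratio

# After training (omitted), the symbolic law is read directly from the weights:
x = np.linspace(0.5, 2.0, 5)
print(forward(x))
print("f(x) = (" + " + ".join(f"{a:.2f}*x^{p:.2f}" for a, p in zip(coef_num, exp_num))
      + ") / (" + " + ".join(f"{b:.2f}*x^{q:.2f}" for b, q in zip(coef_den, exp_den)) + ")")
```

The point of the construction is that a monomial x^p equals exp(p * ln x) for x > 0, so the exponents and coefficients stored in the weights translate one-to-one into the symbolic fractional-rational form that the paper extracts.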