A novel method to compute the weights of neural networks

Times Cited: 10
Authors
Gao, Zhentao [1 ]
Chen, Yuanyuan [1 ]
Yi, Zhang [1 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Machine Intelligence Lab, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neural networks; Gradient free; Closed-form solution; White box models;
DOI
10.1016/j.neucom.2020.03.114
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural networks are the main strength of modern artificial intelligence; they have demonstrated revolutionary performance in a wide range of applications. In practice, the weights of neural networks are generally obtained indirectly through iterative training methods. Such methods are inefficient and problematic in many respects. Moreover, neural networks trained end-to-end by such methods are typical black-box models that are hard to interpret. It would therefore be significantly better if the weights of a neural network could be calculated directly. In this paper, we identify the key to calculating the weights of a neural network directly: assigning proper targets to the hidden units. Furthermore, once such targets are assigned, the neural network becomes a white-box model that is easy to interpret. We thus propose a framework for solving the weights of a neural network and provide a sample implementation of the framework. The implementation was tested in various classification and regression experiments. Compared with neural networks trained using traditional methods, networks constructed with solved weights achieved similar or better performance on many tasks while remaining interpretable. Given the early stage of the proposed approach, many improvements can be expected in future developments. (C) 2020 Elsevier B.V. All rights reserved.
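The abstract's core idea, that a layer's weights can be solved in closed form once targets are assigned to its units, can be illustrated with a generic least-squares sketch. The paper's actual target-assignment scheme is not given in this record; the variable names (`X`, `T`, `W`) and the pseudoinverse solve below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: once a layer's units have assigned targets T for
# inputs X (a linear model T ≈ X @ W), the weights admit the
# closed-form, gradient-free least-squares solution W = pinv(X) @ T.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))     # layer inputs: 100 samples, 8 features
W_true = rng.standard_normal((8, 3))  # hypothetical weights used to build targets
T = X @ W_true                        # targets assigned to the layer's 3 units

W = np.linalg.pinv(X) @ T             # direct solve, no iterative training
print(np.allclose(W, W_true))         # True: X has full column rank, so W is recovered
```

In this toy setting the solve recovers the generating weights exactly; on real data one would solve each layer against noisy assigned targets, and the least-squares fit is only approximate.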
Pages: 409-427
Page count: 19
Related Papers
50 records in total
  • [1] A weighting method for Hopfield neural networks with discrete weights
    Mizutani, H
    ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE, 1996, 79 (09): : 67 - 75
  • [2] A multiobjective continuation method to compute the regularization path of deep neural networks
    Amakor, Augustina Chidinma
    Sonntag, Konstantin
    Peitz, Sebastian
    MACHINE LEARNING WITH APPLICATIONS, 2025, 19
  • [3] Weights set selection method for feed forward neural networks
    Ene, Alexandru
    Stirbu, Cosmin
    2013 INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE (ECAI), 2013,
  • [4] A comparison of the weights-of-evidence method and probabilistic neural networks
    Singer D.A.
    Kouda R.
    Natural Resources Research, 1999, 8 (4) : 287 - 298
  • [5] NEURAL NETWORKS TO COMPUTE MOTION IN MOLECULES
    LIEBOVITCH, LS
    ARNOLD, ND
    SELECTOR, LY
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 1994, 35 (04) : 1452 - 1452
  • [6] Silicon neural networks learn as they compute
    Paillet, G
    LASER FOCUS WORLD, 1996, 32 (08): : S17 - S19
  • [7] On neural networks with minimal weights
    Bohossian, V
    Bruck, J
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 8: PROCEEDINGS OF THE 1995 CONFERENCE, 1996, 8 : 246 - 252
  • [8] A novel metaheuristic population algorithm for optimising the connection weights of neural networks
    Mousavirad, Seyed Jalaleddin
    Schaefer, Gerald
    Rezaee, Khosro
    Oliva, Diego
    Zabihzadeh, Davood
    Chakrabortty, Ripon K.
    Mohammadigheymasi, Hamzeh
    Pedram, Mehdi
    EVOLVING SYSTEMS, 2025, 16 (01)
  • [9] Bidirectional clustering of weights for neural networks with common weights
    Saito, Kazumi
    Nakano, Ryohei
    Systems and Computers in Japan, 2007, 38 (10): : 46 - 57
  • [10] NEURAL NETWORKS TO COMPUTE MOLECULAR-DYNAMICS
    LIEBOVITCH, LS
    ARNOLD, ND
    SELECTOR, LY
    BIOPHYSICAL JOURNAL, 1994, 66 (02) : A391 - A391