A novel method to compute the weights of neural networks

Cited by: 10
Authors
Gao, Zhentao [1 ]
Chen, Yuanyuan [1 ]
Yi, Zhang [1 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Machine Intelligence Lab, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neural networks; Gradient free; Closed-form solution; White box models;
DOI
10.1016/j.neucom.2020.03.114
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Neural networks are the main strength of modern artificial intelligence; they have demonstrated revolutionary performance in a wide range of applications. In practice, the weights of neural networks are generally obtained indirectly using iterative training methods. Such methods are inefficient and problematic in many respects. Moreover, neural networks trained end-to-end by such methods are typical black box models that are hard to interpret. It would therefore be significantly better if the weights of a neural network could be calculated directly. In this paper, we identify the key to calculating the weights of a neural network directly: assigning proper targets to the hidden units. Furthermore, once such targets are assigned, the neural network becomes a white box model that is easy to interpret. We thus propose a framework for solving the weights of a neural network and provide a sample implementation of the framework. The implementation was tested in various classification and regression experiments. Compared with neural networks trained using traditional methods, networks constructed from solved weights achieved similar or better performance on many tasks, while remaining interpretable. Given the early stage of the proposed approach, many improvements can be expected in future developments. (C) 2020 Elsevier B.V. All rights reserved.
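The core idea in the abstract (assign targets to the hidden units, then obtain each layer's weights in closed form) can be illustrated with an ordinary least-squares sketch. This is a minimal illustration, not the paper's actual implementation: the random-projection target-assignment heuristic, the tanh activation, and all variable names below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(pi * x) on a 1-D input.
X = np.linspace(-2, 2, 200).reshape(-1, 1)
Y = np.sin(np.pi * X)

# Step 1 (assumed heuristic): assign hidden-unit targets H in (-1, 1).
# Here they come from a random linear projection squashed by tanh;
# the paper's actual target-assignment scheme may differ.
n_hidden = 20
H = np.tanh(X @ rng.normal(size=(1, n_hidden)))

# Step 2: solve the input->hidden weights in closed form by inverting
# the activation and least-squares fitting X -> arctanh(H).
Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
H_clipped = np.clip(H, -0.999, 0.999)      # keep arctanh finite
W1, *_ = np.linalg.lstsq(Xb, np.arctanh(H_clipped), rcond=None)

# Step 3: solve the hidden->output weights by least squares H -> Y.
Hb = np.hstack([np.tanh(Xb @ W1), np.ones((len(X), 1))])
W2, *_ = np.linalg.lstsq(Hb, Y, rcond=None)

pred = Hb @ W2
mse = np.mean((pred - Y) ** 2)
print(f"training MSE: {mse:.4f}")
```

Both weight matrices are obtained by direct linear solves rather than gradient descent, which is the gradient-free, closed-form property the abstract emphasizes; the quality of the resulting network then hinges entirely on how the hidden targets H are chosen.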
Pages: 409-427 (19 pages)
Related Papers
50 in total
  • [41] Neural Networks with Superexpressive Activations and Integer Weights
    Beknazaryan, Aleksandr
    INTELLIGENT COMPUTING, VOL 2, 2022, 507 : 445 - 451
  • [42] Stochastic Weights Binary Neural Networks on FPGA
    Fukuda, Yasushi
    Kawahara, Takayuki
    2018 7TH IEEE INTERNATIONAL SYMPOSIUM ON NEXT-GENERATION ELECTRONICS (ISNE), 2018, : 220 - 222
  • [43] A Convolutional Accelerator for Neural Networks With Binary Weights
    Ardakani, Arash
    Condo, Carlo
    Gross, Warren J.
    2018 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2018,
  • [44] POET: an evo-devo method to optimize the weights of large artificial neural networks
    Fontana, Alessandro
    Soltoggio, Andrea
    Wrobel, Borys
    ALIFE 2014: THE FOURTEENTH INTERNATIONAL CONFERENCE ON THE SYNTHESIS AND SIMULATION OF LIVING SYSTEMS, 2014, : 447 - 454
  • [45] Backpropagation Learning Method with Interval Type-2 Fuzzy Weights in Neural Networks
    Gaxiola, Fernando
    Melin, Patricia
    Valdez, Fevrier
    Castillo, Oscar
    2013 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2013,
  • [46] DETERMINING INITIAL WEIGHTS OF FEEDFORWARD NEURAL NETWORKS BASED ON LEAST-SQUARES METHOD
    YAM, YF
    CHOW, TWS
    NEURAL PROCESSING LETTERS, 1995, 2 (02) : 13 - 17
  • [47] A Novel Diagnosis Method for SZ by Deep Neural Networks
    Qiao, Chen
    Shi, Yan
    Li, Bin
    An, Tai
    DATA MINING AND BIG DATA, DMBD 2017, 2017, 10387 : 433 - 441
  • [48] A NOVEL DESIGN METHOD FOR MULTILAYER FEEDFORWARD NEURAL NETWORKS
    LEE, J
    NEURAL COMPUTATION, 1994, 6 (05) : 885 - 901
  • [49] A novel fermentation control method based on neural networks
    Yang, XH
    Sun, ZH
    Sun, YX
    ADVANCES IN NEURAL NETWORKS - ISNN 2004, PT 2, 2004, 3174 : 194 - 199
  • [50] NOVEL LEARNING-METHOD FOR ANALOG NEURAL NETWORKS
    MATSUMOTO, T
    KOGA, M
    ELECTRONICS LETTERS, 1990, 26 (15) : 1136 - 1137