Training Neural Networks is ∃R-complete

Cited: 0
Authors
Abrahamsen, Mikkel [1 ]
Kleist, Linda [2 ]
Miltzow, Tillmann [3 ]
Affiliations
[1] Univ Copenhagen, Copenhagen, Denmark
[2] Tech Univ Carolo Wilhelmina Braunschweig, Braunschweig, Germany
[3] Univ Utrecht, Utrecht, Netherlands
Keywords
NODE;
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Given a neural network, training data, and a threshold, finding weights for the neural network such that the total error is below the threshold is known to be NP-hard. We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is ∃R-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, ∃R is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP. Neural networks are usually trained using some variation of backpropagation. The result of this paper offers an explanation of why techniques commonly used to solve large instances of NP-complete problems, such as SAT solvers, IP solvers, local search, and dynamic programming, seem not to be of use for this task.
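To make the decision problem in the abstract concrete, here is a minimal sketch (illustrative only, not taken from the paper): for a fixed tiny architecture, training data, and candidate weights, it evaluates the total squared error, i.e. the quantity whose existence below a threshold the ∃R-completeness result concerns. The network shape (one hidden ReLU unit, one linear output) and all names are hypothetical choices for illustration.

```python
# Illustrative sketch (not from the paper): the quantity in the decision
# problem "do weights exist with total error below a threshold?" for a
# tiny network with one hidden ReLU unit and a linear output.

def relu(x):
    return max(0.0, x)

def net(x, w1, b1, w2, b2):
    # one hidden ReLU unit, then a linear output unit
    return w2 * relu(w1 * x + b1) + b2

def total_error(weights, data):
    # total squared error of the network on the training data
    w1, b1, w2, b2 = weights
    return sum((net(x, w1, b1, w2, b2) - y) ** 2 for x, y in data)

data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]  # target: identity on [0, 2]
weights = (1.0, 0.0, 1.0, 0.0)               # an exact fit for this data
print(total_error(weights, data))            # 0.0
```

Note that the error is a piecewise polynomial in the weights, which is why the existence question naturally lands in ∃R, the class of problems reducible to deciding real polynomial systems.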
Pages: 14