Training Neural Networks is ∃R-complete

Cited by: 0
Authors
Abrahamsen, Mikkel [1 ]
Kleist, Linda [2 ]
Miltzow, Tillmann [3 ]
Affiliations
[1] Univ Copenhagen, Copenhagen, Denmark
[2] Tech Univ Carolo Wilhelmina Braunschweig, Braunschweig, Germany
[3] Univ Utrecht, Utrecht, Netherlands
Keywords
NODE
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Given a neural network, training data, and a threshold, finding weights for the neural network such that the total error is below the threshold is known to be NP-hard. We determine the algorithmic complexity of this fundamental problem precisely by showing that it is ∃R-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, ∃R is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP. Neural networks are usually trained using some variation of backpropagation. The result of this paper offers an explanation of why techniques commonly used to solve large instances of NP-complete problems, such as SAT solvers, IP solvers, local search, and dynamic programming, seem not to be of use for this task.
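
To make the statement concrete: an instance of the existential theory of the reals (ETR) is, for example, the question whether there exist real x, y with x² + y² = 1 and x·y > 1/2. The minimal sketch below illustrates the training decision problem itself; the architecture, activation, data, and threshold are all hypothetical illustrations, not taken from the paper. It shows that verifying a given weight vector is easy, whereas the ∃R-completeness result concerns deciding whether any suitable weights exist.

    def relu(x):
        # Rectified linear unit; a common activation (the paper's exact
        # architecture is not specified in this abstract).
        return max(0.0, x)

    def network(weights, x):
        # Hypothetical one-hidden-neuron network: y = w2*relu(w1*x + b1) + b2.
        w1, b1, w2, b2 = weights
        return w2 * relu(w1 * x + b1) + b2

    def total_error(weights, data):
        # Sum of squared errors over the training set.
        return sum((network(weights, x) - y) ** 2 for x, y in data)

    # Hypothetical training data and error threshold.
    data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
    threshold = 1e-9

    # Checking one candidate weight vector is straightforward ...
    print(total_error((1.0, 0.0, 1.0, 0.0), data) <= threshold)  # True

    # ... but the decision problem asks whether ANY real weight vector
    # achieves this. Optimal weights may be irrational, so there is no
    # obvious polynomial-size certificate, which is one intuition for
    # why the problem is not expected to lie in NP.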
Pages: 14
Related Papers
50 items in total
  • [41] Visualizations of the training process of neural networks
    Babic, Karlo
    Mestrovic, Ana
    2019 42ND INTERNATIONAL CONVENTION ON INFORMATION AND COMMUNICATION TECHNOLOGY, ELECTRONICS AND MICROELECTRONICS (MIPRO), 2019, : 1619 - 1623
  • [42] Nonsmooth training of fuzzy neural networks
    C. Eitzinger
    Soft Computing, 2004, 8 : 443 - 448
  • [43] Hybrid training of optical neural networks
    Spall, James
    Guo, Xianxin
    Lvovsky, A. I.
    OPTICA, 2022, 9 (07): 803 - 811
  • [44] Training of neural networks with search behaviour
    Baskanova T.F.
    Lankin Y.P.
    Russian Physics Journal, 2002, 45 (4) : 389 - 393
  • [45] Training Invertible Neural Networks as Autoencoders
    The-Gia Leo Nguyen
    Ardizzone, Lynton
    Koethe, Ullrich
    PATTERN RECOGNITION, DAGM GCPR 2019, 2019, 11824 : 442 - 455
  • [46] Fast and Efficient Training of Neural Networks
    Yu, Hao
    Wilamowski
    3RD INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION, 2010, : 175 - 181
  • [47] Training deep quantum neural networks
    Beer, Kerstin
    Bondarenko, Dmytro
    Farrelly, Terry
    Osborne, Tobias J.
    Salzmann, Robert
    Scheiermann, Daniel
    Wolf, Ramona
    NATURE COMMUNICATIONS, 2020, 11 (01)
  • [48] Training neural networks by stochastic optimisation
    Verikas, A
    Gelzinis, A
    NEUROCOMPUTING, 2000, 30 (1-4) : 153 - 172
  • [49] Online Normalization for Training Neural Networks
    Chiley, Vitaliy
    Sharapov, Ilya
    Kosson, Atli
    Koster, Urs
    Reece, Ryan
    de la Fuente, Sofia Samaniego
    Subbiah, Vishal
    James, Michael
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [50] Random Search in Neural Networks Training
    Krasnoproshin, V. V.
    Matskevich, V. V.
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2024, 34 (02) : 309 - 316