PAC-learning gains of Turing machines over circuits and neural networks

Cited by: 1
Authors
Pinon, Brieuc [1 ,2 ]
Jungers, Raphael [1 ]
Delvenne, Jean-Charles [1 ]
Affiliations
[1] UCLouvain, ICTEAM, INMA, Louvain-la-Neuve, Belgium
[2] Avenue Georges Lemaitre 4/6 (L4.05.01), B-1348 Louvain-la-Neuve, Belgium
Funding
European Research Council
Keywords
Kolmogorov complexity; Minimum description length; PAC-learning; Computational complexity; Deep learning; Program induction; FORMAL THEORY; COMPLEXITY; BOUNDS; SIZE;
DOI
10.1016/j.physd.2022.133585
CLC Classification
O29 [Applied Mathematics]
Subject Classification
070104
Abstract
A caveat to many applications of the current Deep Learning approach is the need for large-scale data. One improvement suggested by Kolmogorov Complexity results is to apply the Minimum Description Length (MDL) principle with computationally universal models. We study the gains in sample efficiency that this approach can bring in principle. We use polynomial-time Turing machines to represent computationally universal models and Boolean circuits to represent Artificial Neural Networks (ANNs) acting on finite-precision digits. Our analysis reveals direct links between our question and Computational Complexity results. We provide lower and upper bounds on the potential gains in sample efficiency obtained by applying the MDL principle with Turing machines instead of ANNs. Our bounds depend on the bit-size of the input of the Boolean function to be learned. Furthermore, we highlight close relationships between classical open problems in Circuit Complexity and the tightness of these bounds.
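To make the scale of such sample-efficiency gains concrete, the sketch below evaluates the classical Occam-razor bound for finite hypothesis classes: if every hypothesis is describable in K bits, then m >= (1/eps) * (K ln 2 + ln(1/delta)) i.i.d. examples suffice for a consistent learner to be (eps, delta)-PAC. This is the textbook bound, not the paper's own theorems, and the description lengths (program_bits, circuit_gates) and the s*log2(s) circuit-encoding estimate are hypothetical numbers chosen purely for illustration.

    import math

    def occam_sample_bound(desc_bits: int, eps: float, delta: float) -> int:
        """Occam-razor bound: a consistent learner over a class of at most
        2**desc_bits hypotheses is (eps, delta)-PAC with this many examples."""
        return math.ceil((desc_bits * math.log(2) + math.log(1.0 / delta)) / eps)

    # Hypothetical description lengths for one and the same target function:
    # a concise Turing-machine program vs. a circuit whose size grows with
    # the input bit-size n (all constants here are made up for illustration).
    n = 64                    # bit-size of the Boolean function's input
    program_bits = 200        # assumed: short universal-program description
    circuit_gates = 50 * n    # assumed: circuit with ~50*n gates
    circuit_bits = math.ceil(circuit_gates * math.log2(circuit_gates))  # ~s*log2(s)-bit encoding

    for label, bits in (("Turing-machine MDL", program_bits),
                        ("circuit MDL       ", circuit_bits)):
        print(label, "->", occam_sample_bound(bits, eps=0.05, delta=0.01), "samples")

Under these made-up numbers the program-based class needs a few thousand examples while the circuit-based class needs several hundred thousand: whenever a short program computes a function that only large circuits can express, the program-based hypothesis class is exponentially smaller, which is the mechanism behind the gains the paper quantifies.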
Pages: 15
Related Papers (50 in total)
  • [32] Analog VLSI circuits for learning rate adaptation in self-organizing neural networks
    Sellami, L
    Newcomb, RW
    Ferrandez, JM
    Rodellar, V
    Gomez, P
    Roa, L
IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE, 1998 : 541 - 546
  • [33] Robust Meta-Learning over Graphs with Graph Neural Networks
    Sadeghi, Alireza
    Giannakis, Georgios B.
2024 IEEE 13TH SENSOR ARRAY AND MULTICHANNEL SIGNAL PROCESSING WORKSHOP, SAM 2024, 2024
  • [34] The Impact of Dataset Complexity on Transfer Learning over Convolutional Neural Networks
    Wanderley, Miguel D. de S.
    de A. e Bueno, Leonardo
    Zanchettin, Cleber
    Oliveira, Adriano L. I.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 582 - 589
  • [35] On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective
    Su, Lili
    Yang, Pengkun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [36] On the regularized risk of distributionally robust learning over deep neural networks
    Trillos, Camilo Andres Garcia
    Trillos, Nicolas Garcia
    RESEARCH IN THE MATHEMATICAL SCIENCES, 2022, 9 (03)
  • [38] SlimFL: Federated Learning With Superposition Coding Over Slimmable Neural Networks
    Yun, Won Joon
    Kwak, Yunseok
    Baek, Hankyul
    Jung, Soyi
    Ji, Mingyue
    Bennis, Mehdi
    Park, Jihong
    Kim, Joongheon
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2023, 31 (06) : 2499 - 2514
  • [39] Geometric preprocessing, geometric feedforward neural networks and Clifford support vector machines for visual learning
    Bayro-Corrochano, E
    Vallejo, R
    Arana-Daniel, N
    NEUROCOMPUTING, 2005, 67 : 54 - 105
  • [40] Evolving Spiking Neural Networks for online learning over drifting data streams
    Lobo, Jesus L.
    Lana, Ibai
    Del Ser, Javier
    Bilbao, Miren Nekane
    Kasabov, Nikola
    NEURAL NETWORKS, 2018, 108 : 1 - 19