Sample compression, support vectors, and generalization in deep learning

Cited by: 4
Authors
Snyder C. [1 ]
Vishwanath S. [1 ]
Affiliation
[1] Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
Keywords
Deep neural networks; Generalization; Sample compression
DOI
10.1109/JSAIT.2020.2981864
Abstract
Although Deep Neural Networks (DNNs) are widely celebrated for their practical performance, they possess many intriguing depth-related properties that are difficult to explain both theoretically and intuitively. Understanding how the weights of a deep network coordinate across layers to form a useful learner has proven challenging, in part because the repeated composition of nonlinearities is analytically intractable. This paper presents a reparameterization of DNNs as a linear function of a feature map that is locally independent of the weights. This feature map transforms depth dependencies into simple tensor products and maps each input to a discrete subset of the feature space. Then, using a max-margin assumption, the paper develops a sample compression representation of the neural network in terms of the discrete activation states of neurons induced by s "support vectors". The paper shows that the number of support vectors s relates to learning guarantees for neural networks through sample compression bounds, yielding a sample complexity of O(ns/ε) for networks with n neurons. Finally, the number of support vectors s is found to increase monotonically with width and label noise but to decrease with depth. © 2020 IEEE.
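The abstract's central idea, that depth dependencies become simple products and each input induces a discrete activation state, can be illustrated with a small toy sketch. This is our own path-sum decomposition of a one-hidden-layer ReLU network, not the paper's exact construction: for a fixed activation pattern, the output is linear in the products of weights along input-to-output paths.

```python
import numpy as np

# Toy illustration (not the paper's construction): a ReLU network's output
# equals a sum over input->hidden->output paths, each path weighted by the
# product of its weights and gated by the hidden unit's 0/1 activation
# indicator. Depth thus turns into products, and for a fixed activation
# pattern the network is linear in these path products.

rng = np.random.default_rng(0)
d, h = 3, 4                       # input dimension and hidden width (arbitrary)
W1 = rng.normal(size=(h, d))      # first-layer weights
w2 = rng.normal(size=h)           # second-layer weights (scalar output)
x = rng.normal(size=d)

relu = lambda z: np.maximum(z, 0.0)
f_direct = w2 @ relu(W1 @ x)      # the usual forward pass

a = (W1 @ x > 0).astype(float)    # discrete activation state of the hidden layer

# Path view: enumerate paths i -> j -> output, gating each by a[j].
f_paths = sum(w2[j] * a[j] * W1[j, i] * x[i]
              for j in range(h) for i in range(d))

print(np.isclose(f_direct, f_paths))  # the two computations agree
```

Since relu(W1 @ x) equals a * (W1 @ x) elementwise, the path sum reproduces the forward pass exactly; the paper's feature map generalizes this gating idea to deeper networks via tensor products.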
Pages: 106-120 (14 pages)
Related papers (10 of 50 shown)
  • [1] Compression, Generalization and Learning
    Campi, Marco C.
    Garatti, Simone
    [J]. Journal of Machine Learning Research, 2023, 24 (339)
  • [3] A Generalization Sample Learning Method of Deep Learning for Semantic Segmentation of Remote Sensing Images
    Zheng, Chen
    Li, Jingying
    Chen, Yuncheng
    Wang, Leiguang
    [J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61
  • [4] Compression Techniques for Deep Fisher Vectors
    Ahmed, Sarah
    Azim, Tayyaba
    [J]. ICPRAM: Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods, 2017: 217-224
  • [5] Exploring Generalization in Deep Learning
    Neyshabur, Behnam
    Bhojanapalli, Srinadh
    McAllester, David
    Srebro, Nathan
    [J]. Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017, 30
  • [6] Generalization Error in Deep Learning
    Jakubovitz, Daniel
    Giryes, Raja
    Rodrigues, Miguel R. D.
    [J]. Compressed Sensing and Its Applications, 2019: 153-193
  • [7] Learning Dynamics and Generalization in Deep Reinforcement Learning
    Lyle, Clare
    Rowland, Mark
    Dabney, Will
    Kwiatkowska, Marta
    Gal, Yarin
    [J]. International Conference on Machine Learning, Vol 162, 2022
  • [8] Generalization bottleneck in deep metric learning
    Hu, Zhanxuan
    Wu, Danyang
    Nie, Feiping
    Wang, Rong
    [J]. Information Sciences, 2021, 581: 249-261
  • [9] Deep Learning Semantic Compression: IoT Support over LORA Use Case
    Dridi, Aicha
    Debar, Arnaud
    Gauthier, Vincent
    Ibn Khedher, Hatem
    Afifi, Hossam
    [J]. 2019 2nd IEEE Middle East and North Africa Communications Conference (IEEEMENACOMM'19), 2019: 267-272
  • [10] Visualizing Support Vectors and Topological Data Mapping for Improved Generalization Capabilities
    Madokoro, Hirokazu
    Sato, Kazuhito
    [J]. 2010 International Joint Conference on Neural Networks (IJCNN 2010), 2010