A new perspective for understanding generalization gap of deep neural networks trained with large batch sizes

Cited by: 3
|
Authors
Oyedotun, Oyebade K. [1 ]
Papadopoulos, Konstantinos [1 ]
Aouada, Djamila [1 ]
Affiliations
[1] Univ Luxembourg, Interdisciplinary Ctr Secur Reliabil & Trust SnT, L-1855 Luxembourg, Luxembourg
Keywords
Neural network; Large batch size; Generalization gap; Optimization; Singular value decomposition
DOI
10.1007/s10489-022-04230-8
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) are typically optimized using some form of the mini-batch gradient descent algorithm. A major motivation for mini-batch gradient descent is that, with a suitably chosen batch size, available computing resources (including parallelization) can be used optimally for fast model training. However, many works report a progressive loss of model generalization when the training batch size is increased beyond certain limits, a scenario commonly referred to as the generalization gap. Although several works have proposed different methods for alleviating the generalization gap problem, a unanimous account of its cause is still lacking in the literature. This is especially important given that recent works have observed that several proposed remedies for the generalization gap, such as learning rate scaling and an increased training budget, do not actually resolve it. As such, the main contribution of this paper is to investigate and provide a new perspective on the source of the generalization loss of DNNs trained with a large batch size. Our analysis suggests that a large training batch size results in increased near-rank loss of the units' activation (i.e. output) tensors, which in turn impairs model optimization and generalization. Extensive validation experiments are performed on popular DNN models such as VGG-16, residual networks (ResNet-56) and LeNet-5, using the CIFAR-10, CIFAR-100, Fashion-MNIST and MNIST datasets.
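The abstract's central claim is that a large training batch size increases the near-rank loss of the units' activation tensors, which ties in with the singular-value-decomposition keyword above. As a minimal, hypothetical illustration of how such a diagnostic might be computed (this is not the authors' implementation), the Python/NumPy sketch below collects a layer's activations over a mini-batch, takes their singular value decomposition, and reports the fraction of singular values that are negligible relative to the largest one; the function name, tolerance and toy data are assumptions made for this example.

# Minimal sketch (not the authors' code): estimating "near-rank loss" of a
# layer's activation matrix via singular value decomposition (SVD).
# The tolerance value and toy data below are illustrative assumptions.
import numpy as np

def near_rank_loss(activations: np.ndarray, tol: float = 1e-3) -> float:
    """Return the fraction of near-zero singular values of an activation matrix.

    activations: 2-D array of shape (batch_size, num_units), e.g. one layer's
    outputs collected over a mini-batch. Singular values smaller than
    tol * (largest singular value) are counted as (near-)zero.
    """
    s = np.linalg.svd(activations, compute_uv=False)  # singular values, descending order
    if s.size == 0 or s[0] == 0.0:
        return 1.0                                    # degenerate case: all-zero activations
    near_zero = np.sum(s < tol * s[0])                # number of near-vanishing singular values
    return float(near_zero) / s.size                  # larger value -> stronger near-rank loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy activations: 512 samples x 128 units, but only ~16 effective directions,
    # mimicking activations that have collapsed onto a low-rank subspace.
    low_rank = rng.standard_normal((512, 16)) @ rng.standard_normal((16, 128))
    activations = low_rank + 1e-4 * rng.standard_normal((512, 128))
    print(f"near-rank loss: {near_rank_loss(activations):.2f}")

A higher fraction indicates that the activations occupy a lower-dimensional subspace, which is the kind of near-rank loss the paper associates with large-batch training.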
Pages: 15621-15637
Number of pages: 17
Related Papers
50 records in total
  • [41] Methods for interpreting and understanding deep neural networks
    Montavon, Gregoire
    Samek, Wojciech
    Mueller, Klaus-Robert
    DIGITAL SIGNAL PROCESSING, 2018, 73 : 1 - 15
  • [42] Predicting the generalization gap in neural networks using topological data analysis
    Ballester, Ruben
    Clemente, Xavier Arnal
    Casacuberta, Carles
    Madadi, Meysam
    Corneanu, Ciprian A.
    Escalera, Sergio
    NEUROCOMPUTING, 2024, 596
  • [43] Improving the Generalization of Deep Neural Networks in Seismic Resolution Enhancement
    Zhang, Haoran
    Alkhalifah, Tariq
    Liu, Yang
    Birnie, Claire
    Di, Xi
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20
  • [44] Generalization Analysis of Pairwise Learning for Ranking With Deep Neural Networks
    Huang, Shuo
    Zhou, Junyu
    Feng, Han
    Zhou, Ding-Xuan
    NEURAL COMPUTATION, 2023, 35 (06) : 1135 - 1158
  • [45] Generalization Comparison of Deep Neural Networks via Output Sensitivity
    Forouzesh, Mahsa
    Salehi, Farnood
    Thiran, Patrick
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7411 - 7418
  • [46] Quantitative analysis of the generalization ability of deep feedforward neural networks
    Yang, Yanli
    Li, Chenxia
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2021, 40 (03) : 4867 - 4876
  • [47] Learning Cartographic Building Generalization with Deep Convolutional Neural Networks
    Feng, Yu
    Thiemann, Frank
    Sester, Monika
    ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION, 2019, 8 (06)
  • [48] Improving generalization of deep neural networks by leveraging margin distribution
    Lyu, Shen-Huan
    Wang, Lu
    Zhou, Zhi-Hua
    NEURAL NETWORKS, 2022, 151 : 48 - 60
  • [50] Sparsity-aware generalization theory for deep neural networks
    Muthukumar, Ramchandran
    Sulam, Jeremias
    THIRTY SIXTH ANNUAL CONFERENCE ON LEARNING THEORY, VOL 195, 2023, 195