Reducing Flipping Errors in Deep Neural Networks

Cited by: 0
Authors
Deng, Xiang [1,2]
Xiao, Yun [2]
Long, Bo [2]
Zhang, Zhongfei [1]
Affiliations
[1] SUNY Binghamton, Dept Comp Sci, Binghamton, NY 13902 USA
[2] JD Com, Beijing, Peoples R China
Keywords
DOI
Not available
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) have been widely applied in various domains of artificial intelligence, including computer vision and natural language processing. A DNN is typically trained for many epochs, and a validation dataset is then used to select the model from one epoch (which we simply call "the last epoch") as the final model for making predictions on unseen samples; this final model usually cannot achieve perfect accuracy on unseen samples. An interesting question is: how many of the test (unseen) samples that a DNN misclassifies in the last epoch were ever correctly classified by the DNN before the last epoch? In this paper, we study this question empirically and find, on several benchmark datasets, that the vast majority of the samples misclassified in the last epoch had been classified correctly in some earlier epoch; that is, the predictions for these samples were flipped from "correct" to "wrong". Motivated by this observation, we propose to restrict the behavior changes of a DNN on correctly-classified samples so that the correct local decision boundaries are maintained and the flipping error on unseen samples is largely reduced. Extensive experiments on different benchmark datasets with different modern network architectures demonstrate that the proposed flipping error reduction (FER) approach can substantially improve the generalization, robustness, and transferability of DNNs without introducing any additional network parameters or inference cost, and with only negligible training overhead.
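A minimal sketch of the idea described in the abstract, under stated assumptions rather than the paper's exact formulation: cache each sample's softmax output and correctness after every epoch, then add a consistency penalty that discourages the model from drifting away from predictions that were already correct. The KL-divergence form of the penalty, the helper name `fer_style_loss`, and the weight `lam` are all assumptions introduced here for illustration.

```python
# Hedged sketch of a flipping-error-reduction-style regularizer.
# Assumed details (not from the paper): the KL-divergence penalty,
# the name fer_style_loss, and the hyperparameter lam.
import torch
import torch.nn.functional as F

def fer_style_loss(logits, targets, prev_probs, prev_correct, lam=1.0):
    """Cross-entropy plus a consistency penalty on samples that the model
    classified correctly in the previous epoch.

    logits       -- (B, C) current model outputs
    targets      -- (B,)   ground-truth labels
    prev_probs   -- (B, C) softmax outputs cached after the previous epoch
    prev_correct -- (B,)   bool mask, True if previously classified correctly
    lam          -- weight of the consistency term (assumed hyperparameter)
    """
    ce = F.cross_entropy(logits, targets)
    if prev_correct.any():
        # KL(previous || current) on previously-correct samples: penalize
        # drifting away from predictions that were already correct.
        log_p = F.log_softmax(logits[prev_correct], dim=1)
        consistency = F.kl_div(log_p, prev_probs[prev_correct],
                               reduction="batchmean")
    else:
        consistency = logits.new_zeros(())
    return ce + lam * consistency

# Toy usage: a batch of 4 samples with 3 classes.
logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 2, 1, 0])
prev_probs = torch.softmax(torch.randn(4, 3), dim=1)
prev_correct = torch.tensor([True, False, True, True])
loss = fer_style_loss(logits, targets, prev_probs, prev_correct, lam=0.5)
loss.backward()
```

In practice, `prev_probs` and `prev_correct` would be refreshed from a forward pass at the end of each epoch, indexed by sample ID, so the penalty always tracks the model's most recent correct behavior.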
Pages: 6506-6514
Page count: 9
Related Papers
50 items in total
  • [1] Reducing Image Compression Artifacts for Deep Neural Networks
    Ma, Li
    Peng, Peixi
    Xing, Peiyin
    Wang, Yaowei
    Tian, Yonghong
    [J]. 2021 DATA COMPRESSION CONFERENCE (DCC 2021), 2021, : 355 - 355
  • [2] Reducing the Spike Rate in Deep Spiking Neural Networks
    Fontanini, Riccardo
    Esseni, David
    Loghi, Mirko
    [J]. PROCEEDINGS OF INTERNATIONAL CONFERENCE ON NEUROMORPHIC SYSTEMS 2022, ICONS 2022, 2022.
  • [3] Coin-Flipping Neural Networks
    Sieradzki, Yuval
    Hodos, Nitzan
    Yehuda, Gal
    Schuster, Assaf
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022.
  • [4] Errors of neural networks
    Shubnikov, EI
    [J]. OPTICS AND SPECTROSCOPY, 2000, 88 (03) : 459 - 465
  • [5] Deep neural networks to correct sub-precision errors in CFD
    Haridas, Akash
    Vadlamani, Nagabhushana Rao
    Minamoto, Yuki
    [J]. APPLICATIONS IN ENERGY AND COMBUSTION SCIENCE, 2022, 12
  • [6] Reducing the Model Order of Deep Neural Networks Using Information Theory
    Tu, Ming
    Berisha, Visar
    Cao, Yu
    Seo, Jae-sun
    [J]. 2016 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI), 2016, : 93 - 98
  • [7] Reducing Overfitting in Deep Convolutional Neural Networks Using Redundancy Regularizer
    Wu, Bingzhe
    Liu, Zhichao
    Yuan, Zhihang
    Sun, Guangyu
    Wu, Charles
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 49 - 55
  • [8] Deep Confidence: A Computationally Efficient Framework for Calculating Reliable Prediction Errors for Deep Neural Networks
    Cortes-Ciriano, Isidro
    Bender, Andreas
    [J]. JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2019, 59 (03) : 1269 - 1281
  • [9] SnaPEA: Predictive Early Activation for Reducing Computation in Deep Convolutional Neural Networks
    Akhlaghi, Vahideh
    Yazdanbakhsh, Amir
    Samadi, Kambiz
    Gupta, Rajesh K.
    Esmaeilzadeh, Hadi
    [J]. 2018 ACM/IEEE 45TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA), 2018, : 662 - 673