Improving Regularization in Deep Neural Networks by Co-adaptation Trace Detection

Cited by: 4
Authors
Moayed, Hojjat [1 ]
Mansoori, Eghbal G. [1 ]
Affiliations
[1] Shiraz Univ, Sch Elect & Comp Engn, Shiraz, Iran
Keywords
Deep neural networks; Regularization; Dropout; Co-adaptation; Under-/over-dropping
DOI
10.1007/s11063-023-11293-2
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Co-adaptation of units is one of the most critical concerns in deep neural networks (DNNs), as it leads to overfitting. Dropout has been an active research subject in recent years as a means of preventing overfitting. In previous studies, the dropout probability is either kept fixed or varied according to a simple schedule over the training epochs. However, there is no evidence that co-adaptation is uniformly distributed among the units of a model; applying Dropout with an identical probability to all units therefore yields imbalanced regularization and the under-/over-dropping problem. This paper proposes DropCT, a variant of Dropout that detects co-adaptation traces (CTs) among units using the label propagation algorithm from community detection. It sets the DropCT probability of the units in each CT according to that CT's co-adaptation pressure, thereby applying a dynamic regularization that avoids under- and over-dropping. DropCT can be integrated with different architectures as a general regularization method. Experimental results confirm that DropCT improves generalization and is comparatively simple to apply, requiring no tuning of regularization hyperparameters.
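The pipeline the abstract outlines — group units into co-adaptation traces via label propagation, then drop each trace in proportion to its co-adaptation pressure — can be sketched as below. This is a hypothetical NumPy illustration, not the authors' implementation: the correlation-based edge weights, the `threshold` cutoff, and the use of mean intra-trace correlation as the "pressure" measure are all assumptions made for the sketch.

```python
import numpy as np

def detect_co_adaptation_traces(acts, threshold=0.5, max_iter=100, seed=0):
    """Group units into co-adaptation traces (CTs) by label propagation.

    acts: array of shape (n_samples, n_units) holding recorded activations.
    Returns one community label per unit.
    """
    rng = np.random.default_rng(seed)
    corr = np.abs(np.corrcoef(acts.T))      # unit-unit co-activation strength
    np.fill_diagonal(corr, 0.0)
    adj = corr * (corr >= threshold)        # keep only strong co-adaptation edges
    n = adj.shape[0]
    labels = np.arange(n)                   # each unit starts as its own CT
    for _ in range(max_iter):
        changed = False
        for i in rng.permutation(n):
            nbrs = np.nonzero(adj[i])[0]
            if nbrs.size == 0:
                continue
            # adopt the neighbor label with the largest total edge weight
            weights = {}
            for j in nbrs:
                weights[labels[j]] = weights.get(labels[j], 0.0) + adj[i, j]
            best = max(weights, key=weights.get)
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break
    return labels

def dropct_probabilities(acts, labels, p_base=0.5):
    """Assign each CT a dropout probability scaled by its co-adaptation
    pressure, measured here (an assumption) as mean intra-CT correlation."""
    corr = np.abs(np.corrcoef(acts.T))
    np.fill_diagonal(corr, 0.0)
    probs = np.empty(labels.shape[0])
    for ct in np.unique(labels):
        members = np.nonzero(labels == ct)[0]
        pressure = corr[np.ix_(members, members)].mean() if members.size > 1 else 0.0
        probs[members] = p_base * pressure  # more co-adapted -> dropped more often
    return probs
```

Units that move together end up in the same trace and receive a higher dropout probability, while weakly co-adapted units are dropped rarely — the dynamic, per-group regularization the abstract contrasts with a single fixed Dropout rate.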
Pages: 7985-7997 (13 pages)
Related Papers (50 total)
  • [21] Regional Tree Regularization for Interpretability in Deep Neural Networks
    Wu, Mike
    Parbhoo, Sonali
    Hughes, Michael C.
    Kindle, Ryan
    Celi, Leo
    Zazzi, Maurizio
    Roth, Volker
    Doshi-Velez, Finale
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 6413 - 6421
  • [22] Learning Credible Deep Neural Networks with Rationale Regularization
    Du, Mengnan
    Liu, Ninghao
    Yang, Fan
    Hu, Xia
    2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019), 2019, : 150 - 159
  • [23] GradAug: A New Regularization Method for Deep Neural Networks
    Yang, Taojiannan
    Zhu, Sijie
    Chen, Chen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [24] Towards Robustness of Deep Neural Networks via Regularization
    Li, Yao
    Min, Martin Renqiang
    Lee, Thomas
    Yu, Wenchao
    Kruus, Erik
    Wang, Wei
    Hsieh, Cho-Jui
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7476 - 7485
  • [25] Bridgeout: Stochastic Bridge Regularization for Deep Neural Networks
    Khan, Najeeb
    Shah, Jawad
    Stavness, Ian
    IEEE ACCESS, 2018, 6 : 42961 - 42970
  • [26] Optimizing for interpretability in deep neural networks with tree regularization
    Wu M.
    Parbhoo S.
    Hughes M.C.
    Roth V.
    Doshi-Velez F.
    Journal of Artificial Intelligence Research, 2021, 72
  • [27] Theory of adaptive SVD regularization for deep neural networks
    Bejani, Mohammad Mahdi
    Ghatee, Mehdi
    NEURAL NETWORKS, 2020, 128 : 33 - 46
  • [28] Optimizing for Interpretability in Deep Neural Networks with Tree Regularization
    Wu, Mike
    Parbhoo, Sonali
    Hughes, Michael C.
    Roth, Volker
    Doshi-Velez, Finale
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2021, 72 : 1 - 37
  • [29] Generalize Deep Neural Networks With Adaptive Regularization for Classifying
    Guo, Kehua
    Tao, Ze
    Zhang, Lingyan
    Hu, Bin
    Kui, Xiaoyan
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (01) : 1216 - 1229
  • [30] IMPROVING SPEAKER RECOGNITION PERFORMANCE IN THE DOMAIN ADAPTATION CHALLENGE USING DEEP NEURAL NETWORKS
    Garcia-Romero, Daniel
    Zhang, Xiaohui
    McCree, Alan
    Povey, Daniel
    2014 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY SLT 2014, 2014, : 378 - 383