Deep neural networks with L1 and L2 regularization for high dimensional corporate credit risk prediction

Cited: 28
Authors
Yang, Mei [1 ]
Lim, Ming K. [4 ]
Qu, Yingchi [1 ]
Li, Xingzhi [3 ]
Ni, Du [2 ]
Affiliations
[1] Chongqing Univ, Sch Econ & Business Adm, Chongqing 400030, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Management, Jiangsu 210003, Peoples R China
[3] Chongqing Jiaotong Univ, Sch Econ & Management, Chongqing 400074, Peoples R China
[4] Univ Glasgow, Adam Smith Business Sch, Glasgow G14 8QQ, Scotland
Keywords
High dimensional data; Credit risk; Deep neural network; Prediction; L1 regularization; Support vector machines; Feature selection; Decision making; Models; Classification; SVM
DOI
10.1016/j.eswa.2022.118873
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Accurate credit risk prediction can help companies avoid bankruptcy and make adjustments ahead of time. In corporate credit risk prediction, there is a tendency to include more and more features in the prediction system. However, this often brings in redundant and irrelevant information that greatly impairs the performance of prediction algorithms. This study therefore proposes HDNN, an improved deep neural network (DNN) algorithm for high dimensional prediction of corporate credit risk. We first prove theoretically that there is no regularization effect when L1 regularization is added to the batch normalization layer of a DNN, a hidden rule in industrial implementations that had never been proved. We further prove that adding an L2 constraint on top of the L1 regularization resolves this issue. Finally, a case study of credit data combined with supply chain and network data demonstrates the superiority of the HDNN algorithm on a high dimensional dataset.
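As a rough illustration of the idea in the abstract (not the authors' implementation), the sketch below builds a small Keras DNN for binary default prediction in which each dense layer followed by batch normalization carries a combined L1+L2 penalty rather than L1 alone. A plausible intuition (our gloss, not the paper's proof) is that a batch normalization layer's output is invariant to a positive rescaling of the preceding weights, so a pure L1 penalty can be driven down without changing the network's function; pairing it with an L2 term is the remedy the abstract describes. The layer widths, penalty strengths, feature count, and optimizer settings are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_hdnn(n_features, l1=1e-4, l2=1e-4):
    # Elastic-net style penalty: the L2 term restores an effective
    # constraint that a pure L1 penalty loses under batch normalization,
    # whose output is invariant to positive rescaling of the weights.
    reg = regularizers.L1L2(l1=l1, l2=l2)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        # Bias is omitted because the following BN layer has its own shift.
        layers.Dense(128, use_bias=False, kernel_regularizer=reg),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Dense(64, use_bias=False, kernel_regularizer=reg),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of default
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

# Usage on a hypothetical high dimensional dataset with 500 features:
model = build_hdnn(n_features=500)
# model.fit(X_train, y_train, epochs=20, batch_size=256, validation_split=0.2)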
Pages: 9