Deep neural networks with L1 and L2 regularization for high dimensional corporate credit risk prediction

Citations: 28
Authors:
Yang, Mei [1 ]
Lim, Ming K. [4 ]
Qu, Yingchi [1 ]
Li, Xingzhi [3 ]
Ni, Du [2 ]
Affiliations:
[1] Chongqing Univ, Sch Econ & Business Adm, Chongqing 400030, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Management, Jiangsu 210003, Peoples R China
[3] Chongqing Jiaotong Univ, Sch Econ & Management, Chongqing 400074, Peoples R China
[4] Univ Glasgow, Adam Smith Business Sch, Glasgow G14 8QQ, Scotland
Keywords:
High dimensional data; Credit risk; Deep neural network; Prediction; L1 regularization; SUPPORT VECTOR MACHINES; FEATURE-SELECTION; DECISION-MAKING; MODELS; CLASSIFICATION; SVM
DOI:
10.1016/j.eswa.2022.118873
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Subject Classification Codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
Accurate credit risk prediction can help companies avoid bankruptcy and make adjustments ahead of time. In corporate credit risk prediction, there is a tendency to consider more and more features in the prediction system. However, this often introduces redundant and irrelevant information that greatly impairs the performance of prediction algorithms. Therefore, this study proposes HDNN, an improved deep neural network (DNN) algorithm suited to high dimensional prediction of corporate credit risk. We first proved theoretically that there is no regularization effect when L1 regularization is added to the batch normalization layer of a DNN, a rule followed tacitly in industrial implementations but never previously proved. In addition, we proved that adding an L2 constraint on top of the single L1 regularization solves this issue. Finally, this study analyzed a case study of credit data combined with supply chain and network data to show the superiority of the HDNN algorithm in the scenario of a high dimensional dataset.
Pages: 9
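
The abstract's central technical claim, that a pure L1 penalty loses its regularization effect in the presence of batch normalization and that adding an L2 term restores it, can be illustrated with a short sketch. The paper does not publish its architecture, penalty coefficients, or exactly which parameters it penalizes, so the layer sizes, the coefficients l1 and l2, and the names HDNNBlock and l1_l2_penalty below are illustrative assumptions, not the authors' implementation; the sketch reads the claim as applying to weights feeding a batch normalization layer, whose scale invariance neutralizes an L1 penalty on its own.

    import torch
    import torch.nn as nn

    class HDNNBlock(nn.Module):
        """Hypothetical hidden block: Linear -> BatchNorm -> ReLU.

        BatchNorm standardizes the linear layer's output, so rescaling the
        linear weights by any positive constant leaves the block's function
        unchanged. A pure L1 penalty on those weights can therefore be
        shrunk "for free" and exerts no real regularization pressure.
        """
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)
            self.bn = nn.BatchNorm1d(out_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.relu(self.bn(self.linear(x)))

    def l1_l2_penalty(model: nn.Module, l1: float = 1e-4, l2: float = 1e-4) -> torch.Tensor:
        """Combined L1 + L2 penalty on weights that feed a BatchNorm layer.

        The L2 term breaks the scale invariance noted above, so the L1 term
        regains a genuine sparsity-inducing effect (the paper's proposed fix).
        """
        penalty = torch.zeros(())
        for name, param in model.named_parameters():
            if name.endswith("linear.weight"):
                penalty = penalty + l1 * param.abs().sum() + l2 * param.pow(2).sum()
        return penalty

    # Usage sketch: binary credit-risk classifier on a high dimensional input.
    model = nn.Sequential(HDNNBlock(500, 64), HDNNBlock(64, 32), nn.Linear(32, 1))
    x = torch.randn(128, 500)                  # 128 firms, 500 features
    y = torch.randint(0, 2, (128, 1)).float()  # default / non-default labels
    loss = nn.BCEWithLogitsLoss()(model(x), y) + l1_l2_penalty(model)
    loss.backward()

An equivalent elastic-net penalty is available in Keras as regularizers.l1_l2, if that framework is preferred.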
Related papers
50 items in total
  • [21] A new Sigma-Pi-Sigma neural network based on L1 and L2 regularization and applications
    Jiao, Jianwei
    Su, Keqin
    AIMS MATHEMATICS, 2024, 9 (03): 5995 - 6012
  • [22] Towards l1 Regularization for Deep Neural Networks: Model Sparsity Versus Task Difficulty
    Shen, Ta-Chun
    Yang, Chun-Pai
    Yen, Ian En-Hsu
    Lin, Shou-De
    2022 IEEE 9TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2022: 126 - 134
  • [23] Willetts 'L1 and L2'
    Craig, R.
    DRAMA, 1976, (120): 72 - 73
  • [24] L1/2 regularization
    ZongBen Xu
    Hai Zhang
    Yao Wang
    XiangYu Chang
    Yong Liang
    Science China Information Sciences, 2010, 53: 1159 - 1169
  • [26] Prune Deep Neural Networks With the Modified L1/2 Penalty
    Chang, Jing
    Sha, Jin
    IEEE ACCESS, 2019, 7: 2273 - 2280
  • [27] Image Reconstruction in Ultrasonic Transmission Tomography Using L1/L2 Regularization
    Li, Aoyu
    Liang, Guanghui
    Dong, Feng
    2024 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC 2024, 2024
  • [28] Prediction and integration of semantics during L2 and L1 listening
    Dijkgraaf, Aster
    Hartsuiker, Robert J.
    Duyck, Wouter
    LANGUAGE COGNITION AND NEUROSCIENCE, 2019, 34 (07): 881 - 900
  • [29] Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization
    Repetti, Audrey
    Mai Quyen Pham
    Duval, Laurent
    Chouzenoux, Emilie
    Pesquet, Jean-Christophe
    IEEE SIGNAL PROCESSING LETTERS, 2015, 22 (05): 539 - 543
  • [30] On L1 transfer in L2 comprehension and L2 production
    Ringbom, H.
    LANGUAGE LEARNING, 1992, 42 (01): 85 - 112