A novel method for financial distress prediction based on sparse neural networks with L1/2 regularization

Cited: 0
Authors
Chen, Ying [1 ]
Guo, Jifeng [2 ]
Huang, Junqin [3 ]
Lin, Bin [3 ,4 ]
Affiliations
[1] South China Normal Univ, Int Business Coll, Guangzhou 510631, Peoples R China
[2] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
[3] Sun Yat Sen Univ, Sch Business, Guangzhou 510275, Peoples R China
[4] Guangdong Ind Polytech, Guangzhou 510300, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Financial distress prediction; Features selection; Sparse neural networks; L-1/2 regularization; BANKRUPTCY PREDICTION; DISCRIMINANT-ANALYSIS; FIRMS; SELECTION; RATIOS; REGRESSION; ABILITY; RISK;
DOI
10.1007/s13042-022-01566-y
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Corporate financial distress concerns the interests of both the enterprise and its stakeholders, so its accurate prediction is of great significance for avoiding heavy losses. Despite significant effort and progress in this field, existing prediction methods are either limited in the number of input variables they can handle or restricted to financial predictors. To alleviate these issues, both financial and non-financial variables are screened from existing accounting and finance theory for use as financial distress predictors. In addition, a novel method for financial distress prediction (FDP) based on sparse neural networks, named FDP-SNN, is proposed, in which the hidden-layer weights are constrained with L-1/2 regularization to achieve sparsity, so that relevant and important predictors are selected and prediction accuracy is improved. The sparsity also supports the interpretability of the model. The results show that non-financial variables, such as investor protection and governance structure, play a more important role in financial distress prediction than financial ones, especially as the forecast horizon grows longer. Compared with classic models proposed by leading researchers in accounting and finance, the proposed model performs better in terms of accuracy, precision, and AUC.
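To make the core idea of the abstract concrete, the following is a minimal sketch (not the authors' implementation) of L1/2-regularized hidden-layer weights in a small feed-forward classifier. The synthetic data, network sizes, and hyperparameters (`lam`, `lr`, `eps`) are all assumptions for illustration; the penalty smooths |w|^(1/2) with a small `eps` so its gradient stays finite at zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 firms, 10 candidate predictors; only the first 3 matter.
X = rng.normal(size=(200, 10))
true_logit = X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2]
y = (true_logit + 0.1 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden, lam, lr, eps = 8, 1e-2, 0.1, 1e-3
W1 = rng.normal(scale=0.1, size=(10, n_hidden))  # hidden-layer weights (penalized)
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
b2 = np.zeros(1)

for _ in range(500):
    # Forward pass: tanh hidden layer, sigmoid output.
    H = np.tanh(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2).ravel()

    # Backward pass for mean binary cross-entropy.
    d_logit = (p - y)[:, None] / len(y)
    gW2 = H.T @ d_logit
    gb2 = d_logit.sum(axis=0)
    dH = (d_logit @ W2.T) * (1.0 - H**2)
    gW1 = X.T @ dH
    gb1 = dH.sum(axis=0)

    # Smoothed L1/2 penalty on W1: d/dw (|w|+eps)^(1/2)
    #   = sign(w) / (2 * sqrt(|w| + eps)); eps avoids a blow-up at w = 0.
    gW1 += lam * np.sign(W1) / (2.0 * np.sqrt(np.abs(W1) + eps))

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Per-predictor relevance: total absolute outgoing weight from each input.
# Under the L1/2 penalty, irrelevant predictors should end up near zero.
relevance = np.abs(W1).sum(axis=1)
```

The non-convex L1/2 penalty drives small weights toward exactly zero more aggressively than L1, which is what lets the rows of `W1` act as a built-in feature selector over the candidate predictors.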
Pages: 2089-2103
Page count: 15