Financial Anti-Fraud Based on Dual-Channel Graph Attention Network

Cited by: 3
Authors
Wei, Sizheng [1 ,2 ]
Lee, Suan [2 ]
Affiliations
[1] Xuzhou Univ Technol, Sch Finance, Xuzhou 221018, Peoples R China
[2] Semyung Univ, Sch Comp Sci, Jecheon Si 27136, South Korea
Funding
National Research Foundation of Singapore
Keywords
financial anti-fraud; graph neural networks; graph attention network; deep learning; blockchain;
DOI
10.3390/jtaer19010016
CLC number
F [Economics]
Subject classification code
02
Abstract
This article addresses the pervasive issue of fraud in financial transactions by introducing the Graph Attention Network (GAT) into graph neural networks. It integrates a Node Attention Network and a Semantic Attention Network to construct a Dual-Head Attention Network module, enabling a comprehensive analysis of the complex relationships in user transaction data; this design handles non-linear features and intricate data-interaction relationships effectively. A Gradient-Boosting Decision Tree (GBDT) is incorporated to enhance fraud identification, yielding the GBDT-Dual-channel Graph Attention Network (GBDT-DGAN). To ensure user privacy, the article introduces blockchain technology, culminating in a financial anti-fraud model that fuses blockchain with the GBDT-DGAN algorithm. Experimental verification shows the model reaches an accuracy of 93.82%, an improvement of at least 5.76% over baseline algorithms such as Convolutional Neural Networks; recall and F1 values stand at 89.5% and 81.66%, respectively. The model also exhibits superior network data transmission security, maintaining a packet loss rate below 7%. The proposed model therefore significantly outperforms traditional approaches in financial fraud detection accuracy while ensuring secure data transmission, offering an efficient and secure solution for fraud detection in the financial domain.
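The abstract only names the components, so the following is a minimal PyTorch sketch of how a dual-channel graph attention design of this kind is commonly assembled: a GAT-style node-attention layer per relation channel, fused by a HAN-style semantic-attention layer. All class and variable names (NodeAttention, SemanticAttention, DualChannelGAT) are illustrative assumptions, not the authors' code; the GBDT feature-enhancement step (e.g., feeding GBDT leaf-index encodings in as extra node features) and the blockchain layer are omitted.

```python
# Hypothetical sketch of a dual-channel graph attention model; layer names and
# shapes are assumptions based on standard GAT/HAN designs, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NodeAttention(nn.Module):
    """GAT-style attention over one relation's adjacency matrix (one channel)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.empty(2 * out_dim))  # attention vector a
        nn.init.xavier_uniform_(self.W.weight)
        nn.init.normal_(self.a, std=0.1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.W(x)                                    # (N, out_dim)
        d = h.size(1)
        # Attention logits e_ij = LeakyReLU(a^T [h_i || h_j]), built by broadcasting.
        src = h @ self.a[:d].unsqueeze(1)                # (N, 1) source term
        dst = (h @ self.a[d:].unsqueeze(1)).T            # (1, N) target term
        e = F.leaky_relu(src + dst)                      # (N, N)
        e = e.masked_fill(adj == 0, float("-inf"))       # attend to neighbors only
        alpha = torch.softmax(e, dim=1)
        return F.elu(alpha @ h)                          # aggregated embeddings


class SemanticAttention(nn.Module):
    """Weights each channel's embedding by a learned semantic importance score."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1, bias=False))

    def forward(self, z: torch.Tensor) -> torch.Tensor:  # z: (C, N, dim)
        w = self.proj(z).mean(dim=1)                     # (C, 1) per-channel score
        beta = torch.softmax(w, dim=0).unsqueeze(-1)     # (C, 1, 1) channel weights
        return (beta * z).sum(dim=0)                     # (N, dim) fused output


class DualChannelGAT(nn.Module):
    """Two relation-specific node-attention channels fused by semantic attention."""

    def __init__(self, in_dim: int, out_dim: int, num_channels: int = 2):
        super().__init__()
        self.channels = nn.ModuleList(
            NodeAttention(in_dim, out_dim) for _ in range(num_channels))
        self.semantic = SemanticAttention(out_dim)

    def forward(self, x, adjs):                          # adjs: list of (N, N)
        z = torch.stack([ch(x, a) for ch, a in zip(self.channels, adjs)])
        return self.semantic(z)


# Toy usage: two relation graphs over 8 accounts (e.g., "transfers-to" and
# "shares-device-with"); self-loops keep every softmax row well defined.
x = torch.randn(8, 16)
adjs = [torch.eye(8) + (torch.rand(8, 8) > 0.7).float() for _ in range(2)]
model = DualChannelGAT(in_dim=16, out_dim=32)
print(model(x, adjs).shape)                              # torch.Size([8, 32])
```

A node-then-semantic split like this is the usual reason to call the module "dual-channel": node attention weighs individual neighbors within a relation, while semantic attention weighs whole relations against each other before classification.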
Pages: 297-314 (18 pages)