A grid fault diagnosis framework based on adaptive integrated decomposition and cross-modal attention fusion

Cited: 0
Authors
Liu, Jiangxun [1 ]
Duan, Zhu [1 ]
Liu, Hui [1 ]
Affiliations
[1] Cent South Univ, Inst Artificial Intelligence & Robot IAIR, Sch Traff & Transportat Engn, Key Lab Traff Safety Track,Minist Educ, Changsha 410075, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Smart grid; Fault diagnosis; Decomposition integration; Comprehensive information entropy; Cross-modal attention fusion; CONVOLUTIONAL NEURAL-NETWORK; INFORMATION ENTROPY; LMD;
DOI
10.1016/j.neunet.2024.106400
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405;
Abstract
In large-scale power systems, accurately detecting and diagnosing the type of a fault when it occurs in the grid is a challenging problem. The classification performance of most existing grid fault diagnosis methods depends on the richness and reliability of the data; moreover, it is difficult to obtain sufficient feature information from unimodal circuit signals. To address these issues, we propose a deep residual convolutional neural network (DRCNN)-based framework for grid fault diagnosis. First, we design a comprehensive information entropy value (CIEV) evaluation metric that combines fuzzy entropy (FuzEn) and mutual approximation entropy (MutEn) to integrate multiple decomposition subsequences. Then, a DRCNN and a heterogeneous graph transformer (HGT) are constructed to extract multimodal features while accounting for modal variability. In addition, to capture the implicit information in the multimodal features and control the degree of their contribution, we incorporate a cross-modal attention fusion (CMAF) mechanism into the overall framework. We validate the proposed method on the three-phase transmission line dataset and the VSB power line dataset, achieving accuracies of 99.4% and 99.0%, respectively. The proposed method also outperforms both classical and state-of-the-art methods.
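The paper itself does not publish its implementation here; as a rough illustration of the cross-modal attention idea the abstract describes (features from one modality querying those of another), the following is a minimal NumPy sketch of scaled dot-product cross-attention. The feature shapes, variable names, and single-head formulation are illustrative assumptions, not the authors' CMAF architecture.

```python
import numpy as np

def cross_modal_attention(x_a, x_b):
    """Single-head scaled dot-product attention: modality A queries modality B.

    x_a: (n_a, d) feature vectors from modality A (e.g. signal-branch features)
    x_b: (n_b, d) feature vectors from modality B (e.g. graph-branch features)
    Returns an (n_a, d) fusion: each A-token becomes a weighted mix of B-tokens.
    """
    d = x_a.shape[1]
    scores = x_a @ x_b.T / np.sqrt(d)                 # (n_a, n_b) similarities
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over B tokens
    return weights @ x_b                              # attend to modality B

# Toy usage with random stand-ins for the two feature branches.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))   # hypothetical DRCNN-style features
b = rng.standard_normal((6, 8))   # hypothetical HGT-style features
fused = cross_modal_attention(a, b)
print(fused.shape)                # (4, 8): one fused vector per A-token
```

In practice such a mechanism would use learned query/key/value projections and run in both directions (A attends to B and B attends to A) before the classifier; this sketch keeps only the attention-weighting core.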
Pages: 20