Sparse Self-Attentive Transformer With Multiscale Feature Fusion on Long-Term SOH Forecasting

Cited by: 1
Authors
Zhu, Xinshan [1 ,2 ]
Xu, Chengqian [1 ,2 ]
Song, Tianbao [1 ,2 ]
Huang, Zhen [3 ]
Zhang, Yun [1 ,2 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Tianjin Univ, Natl Ind Educ Platform Energy Storage, Tianjin 300350, Peoples R China
[3] Nanchang Univ, Sch Informat Engn, Nanchang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Batteries; Integrated circuit modeling; Feature extraction; Lithium-ion batteries; Adaptation models; Load modeling; Transformers; Cross-stage partial (CSP)-ProbSparse attention; lithium battery; multiscale feature fusion; state of health (SOH); transformer; STATE-OF-HEALTH; LITHIUM-ION BATTERIES;
DOI
10.1109/TPEL.2024.3395180
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification
0808; 0809;
Abstract
The estimation of the state of health (SOH) of lithium-ion batteries (LIBs) plays an important role in ensuring the safe and stable operation of LIB management systems. To predict SOH more accurately, a model based on a sparse self-attentive transformer (SSAT) with multiscale feature fusion is proposed. The SSAT follows an encoder-decoder structure, and the model inputs are the extracted health indicators and SOH sequences. The encoder stacks three cross-stage partial (CSP)-ProbSparse self-attention blocks; between every two blocks, connections are made via dilated causal convolution and max-pooling layers to obtain exponential growth of the receptive field. All the feature maps output by the self-attention blocks are integrated by multiscale feature fusion, and finally, features of the appropriate dimensions are fed to the decoder through a transition layer to obtain the SOH estimate. Extensive comparative and ablation experiments demonstrate that the SSAT model achieves superior performance across a wide range of situations.
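The abstract's distilling connection — a dilated causal convolution followed by max-pooling between attention blocks, so the sequence shortens while the receptive field grows exponentially — can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification for intuition only (single channel, hand-picked weights), not the paper's implementation:

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    pad = (k - 1) * dilation          # left-pad so no future samples leak in
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def maxpool1d(x, size=2):
    """Non-overlapping max-pooling that halves the sequence length."""
    n = len(x) // size * size
    return x[:n].reshape(-1, size).max(axis=1)

# Hypothetical distilling stack between three attention stages: each stage
# applies a dilated causal conv (dilation doubling per stage) then max-pools,
# so the sequence halves while the receptive field grows exponentially.
seq = np.arange(16, dtype=float)      # stand-in for one encoder feature channel
w = np.array([0.5, 0.5])              # toy 2-tap smoothing kernel
lengths = [len(seq)]
for dilation in (1, 2, 4):
    seq = maxpool1d(dilated_causal_conv1d(seq, w, dilation))
    lengths.append(len(seq))
print(lengths)  # sequence length halves per stage: [16, 8, 4, 2]
```

After three stages a 16-step input is compressed to 2 steps, which is why such pyramidal distilling keeps the self-attention cost manageable on long SOH sequences.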
Pages: 10399-10408 (10 pages)
Related Papers
50 records in total
  • [1] Self-Attentive Feature-level Fusion for Multimodal Emotion Detection
    Hazarika, Devamanyu
    Gorantla, Sruthi
    Poria, Soujanya
    Zimmermann, Roger
    [J]. IEEE 1ST CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2018), 2018, : 196 - 201
  • [2] Long-Term Load Forecasting Based on Feature Fusion and LightGBM
    Tan, Yao
    Teng, Zhenshan
    Zhang, Chao
    Zuo, Gao
    Wang, Zhiguang
    Zhao, Zhengjia
    [J]. 2021 IEEE THE 4TH INTERNATIONAL CONFERENCE ON POWER AND ENERGY APPLICATIONS (ICPEA 2021), 2021, : 104 - 109
  • [3] A Deformable and Multi-Scale Network with Self-Attentive Feature Fusion for SAR Ship Classification
    Chen, Peng
    Zhou, Hui
    Li, Ying
    Liu, Bingxin
    Liu, Peng
    [J]. JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2024, 12 (09)
  • [4] xMTrans: Temporal Attentive Cross-Modality Fusion Transformer for Long-Term Traffic Prediction
    Ung, Huy Quang
    Niu, Hao
    Dao, Minh-Son
    Wada, Shinya
    Minamikawa, Atsunori
    [J]. PROCEEDINGS OF THE 2024 25TH IEEE INTERNATIONAL CONFERENCE ON MOBILE DATA MANAGEMENT, MDM 2024, 2024, : 195 - 202
  • [5] D-Net: Learning for distinctive point clouds by self-attentive point searching and learnable feature fusion
    Liu, Xinhai
    Han, Zhizhong
    Lee, Sanghuk
    Cao, Yan-Pei
    Liu, Yu-Shen
    [J]. COMPUTER AIDED GEOMETRIC DESIGN, 2023, 104
  • [6] Resformer: Combine quadratic linear transformation with efficient sparse Transformer for long-term series forecasting
    Chen, Gongguan
    Wang, Hua
    Liu, Yepeng
    Zhang, Mingli
    Zhang, Fan
    [J]. INTELLIGENT DATA ANALYSIS, 2023, 27 (06) : 1557 - 1572
  • [7] Quaternion-Based Self-Attentive Long Short-term User Preference Encoding for Recommendation
    Thanh Tran
    You, Di
    Lee, Kyumin
    [J]. CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, : 1455 - 1464
  • [8] PWDformer: Deformable transformer for long-term series forecasting
    Wang, Zheng
    Ran, Haowei
    Ren, Jinchang
    Sun, Meijun
    [J]. PATTERN RECOGNITION, 2024, 147
  • [9] Forecasting Variation Trends of Stocks via Multiscale Feature Fusion and Long Short-Term Memory Learning
    Liu, Yezhen
    Yu, Xilong
    Wu, Yanhua
    Song, Shuhong
    [J]. SCIENTIFIC PROGRAMMING, 2021, 2021
  • [10] Long-Term Object Tracking Based On Feature Fusion
    Ge Baoyi
    Zuo Xianzhang
    Hu Yongjiang
    [J]. ACTA OPTICA SINICA, 2018, 38 (11)