Mixformer: An improved self-attention architecture applied to multivariate chaotic time series

Times Cited: 0
Authors
Fu, Ke [1 ]
Li, He [1 ]
Bai, Yan [1 ]
Affiliations
[1] Northeastern Univ, Sch Mech Engn & Automat, Shenyang 110819, Peoples R China
Keywords
Chaotic time series; Multivariate prediction; Information interaction; Global context modeling; Self-attention architecture; PREDICTION; NETWORK;
DOI
10.1016/j.eswa.2023.122484
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Multivariate chaotic time series prediction is a challenging problem, especially because the coupling relationships between multiple variables must be carefully considered. We propose a self-attention architecture with an information interaction module, called Mixformer, for the multivariate chaotic time series prediction task. First, after rethinking the structure of multivariate data, we compensate for the deficiencies of phase space reconstruction with a group of cross-convolution operators whose parameters are updated automatically, and propose an explicitly designed feature reconstruction module. Then, to address the interactive fusion of series information and channel information, we propose an information interaction module that enables feature communication by expanding and contracting dimensions. Breaking the communication barrier between series and channel information enhances the feature representation capability. Finally, we construct Mixformer, which combines locally sparse features with global context features; notably, it integrates the information interaction module and the feature reconstruction module to form a continuous solution. Comparisons with existing models on simulated data (the Lorenz, Chen, and Rossler systems) and a real-world application (power consumption) verify that the proposed model achieves strong performance in practice.
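The abstract gives no implementation details, so the following is only a minimal PyTorch sketch, not the authors' code: delay_embed illustrates the classical phase space reconstruction (Takens delay embedding) that the learnable cross-convolutions are said to compensate for, and InteractionBlock illustrates one plausible expand-then-contract scheme for mixing series (time) information with channel (variable) information. All names, layer choices, and hyperparameters here are assumptions made for illustration.

import torch
import torch.nn as nn

def delay_embed(series: torch.Tensor, dim: int = 3, tau: int = 2) -> torch.Tensor:
    """Classical Takens delay embedding of a univariate series of shape (T,).

    Returns a matrix of shape (T - (dim - 1) * tau, dim) whose rows are the
    reconstructed phase-space points [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}].
    """
    rows = series.shape[0] - (dim - 1) * tau
    return torch.stack([series[i * tau: i * tau + rows] for i in range(dim)], dim=1)

class InteractionBlock(nn.Module):
    """Hypothetical expand-then-contract block: mixes information across the
    channel (variable) axis and the series (time) axis of a (B, L, C) tensor.
    Layer sizes and activations are illustrative assumptions only."""

    def __init__(self, n_channels: int, seq_len: int, expansion: int = 4):
        super().__init__()
        # expand and contract the channel (variable) dimension
        self.channel_mix = nn.Sequential(
            nn.Linear(n_channels, n_channels * expansion),
            nn.GELU(),
            nn.Linear(n_channels * expansion, n_channels),
        )
        # expand and contract the series (time) dimension
        self.series_mix = nn.Sequential(
            nn.Linear(seq_len, seq_len * expansion),
            nn.GELU(),
            nn.Linear(seq_len * expansion, seq_len),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, C)
        x = x + self.channel_mix(x)                                 # mix across channels
        x = x + self.series_mix(x.transpose(1, 2)).transpose(1, 2)  # mix across time steps
        return x

For example, InteractionBlock(n_channels=3, seq_len=64)(torch.randn(8, 64, 3)) returns a tensor of the same shape, so a block of this kind could be stacked with ordinary self-attention layers.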
Pages: 15