A multi-step ahead global solar radiation prediction method using an attention-based transformer model with an interpretable mechanism

Times Cited: 21
Authors
Zhou, Yong [1 ,2 ]
Li, Yizhuo [1 ]
Wang, Dengjia [2 ,3 ]
Liu, Yanfeng [2 ,3 ]
Affiliations
[1] Xi'an Univ Architecture & Technol, Sch Management, 13 Yanta Rd, Xi'an 710055, Peoples R China
[2] Xi'an Univ Architecture & Technol, State Key Lab Green Bldg Western China, 13 Yanta Rd, Xi'an 710055, Peoples R China
[3] Xi'an Univ Architecture & Technol, Sch Bldg Serv Sci & Engn, 13 Yanta Rd, Xi'an 710055, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Multi-step ahead; Global solar radiation; Model predictions; Sequence-to-sequence model; Transformer model; NEURAL-NETWORK; PERSISTENCE; STRATEGIES; FORECASTS; MACHINE; ZONES;
DOI
10.1016/j.ijhydene.2023.01.068
Chinese Library Classification
O64 [Physical Chemistry (Theoretical Chemistry), Chemical Physics]
Subject Classification Codes
070304; 081704
Abstract
Conventional multi-step ahead solar radiation prediction methods ignore the time-dependence of the future solar radiation time series. Therefore, based on sequence-to-sequence (seq2seq) model theory, this paper proposes a seq2seq long short-term memory model (seq2seq-LSTM), a seq2seq-LSTM model with an attention mechanism (seq2seq-at-LSTM), and a transformer model, which consists only of attention mechanisms. Hourly global solar radiation data from Shaanxi, China, between 2016 and 2018 are used to train and validate the models. The results show that introducing the attention mechanism effectively improves the prediction accuracy of the seq2seq-LSTM model. However, owing to the inherent properties of the LSTM, the model still struggles to capture long-distance dependence in the solar radiation time series. In comparison, the transformer model, which is based entirely on the attention mechanism, captures this long-distance dependence much better. Furthermore, as the number of time steps increases, the prediction performance degrades relatively smoothly and slowly. The obtained average coefficient of determination, root mean square error (RMSE), relative RMSE, and mean bias error are 0.9788, 72.91 W/m², 25.25%, and 38.35 W/m², respectively. In addition, the average skill score of the transformer model is around 44.9%, which is 20.54% higher than that of the seq2seq-at-LSTM model and about 40.84% higher than that of the seq2seq-LSTM model. Moreover, the attention weights provide an interpretable explanation for the improvement over the other models. The model developed in this study could also be used for
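For orientation, the sketch below shows, in PyTorch, the general shape of an encoder-decoder transformer applied to multi-step solar radiation forecasting as the abstract describes. It is a minimal illustration only: the hyperparameters (d_model, heads, layers), the 48-hour lookback, the 24-step horizon, and the use of separate decoder inputs are illustrative assumptions, not the authors' implementation.

```python
# Minimal multi-step (seq2seq) transformer forecaster sketch in PyTorch.
# All hyperparameters and window lengths are illustrative placeholders,
# not the values used in the paper.
import torch
import torch.nn as nn

class SolarTransformer(nn.Module):
    def __init__(self, n_features=1, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)  # embed each time step
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.output_proj = nn.Linear(d_model, 1)          # radiation per step

    def forward(self, history, future_inputs):
        # history:       (batch, lookback, n_features) past observations
        # future_inputs: (batch, horizon, n_features)   decoder inputs
        src = self.input_proj(history)
        tgt = self.input_proj(future_inputs)
        # Causal mask keeps the decoder from attending to future steps.
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=mask)
        return self.output_proj(out).squeeze(-1)          # (batch, horizon)

model = SolarTransformer()
x_hist = torch.randn(8, 48, 1)     # 48 past hourly observations
x_dec = torch.randn(8, 24, 1)      # decoder inputs for a 24-step horizon
print(model(x_hist, x_dec).shape)  # torch.Size([8, 24])
```

The decoder's attention weights over the encoded history are what make such a model interpretable: they indicate which past hours each predicted step relies on.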
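The error metrics quoted in the abstract can be computed with standard definitions, sketched below in NumPy. The conventions assumed here (relative RMSE normalised by mean observed radiation; skill score as the RMSE improvement over a persistence forecast) are the common ones; the paper's exact baseline may differ.

```python
# Standard-definition sketch of the abstract's evaluation metrics.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def relative_rmse(y_true, y_pred):
    # RMSE normalised by the mean observed radiation, in percent.
    return 100.0 * rmse(y_true, y_pred) / np.mean(y_true)

def mbe(y_true, y_pred):
    # Mean bias error: positive values indicate over-prediction.
    return np.mean(y_pred - y_true)

def skill_score(y_true, y_pred, y_baseline):
    # RMSE improvement over a baseline forecast, in percent.
    return 100.0 * (1.0 - rmse(y_true, y_pred) / rmse(y_true, y_baseline))

y_true = np.array([300.0, 500.0, 650.0, 400.0])  # observed radiation, W/m²
y_pred = np.array([320.0, 480.0, 630.0, 410.0])  # model forecast, W/m²
y_pers = np.roll(y_true, 1)                      # naive persistence baseline
print(rmse(y_true, y_pred), relative_rmse(y_true, y_pred),
      mbe(y_true, y_pred), skill_score(y_true, y_pred, y_pers))
```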
Pages: 15317-15330
Number of pages: 14