A Generative Adversarial Network with an Attention Spatiotemporal Mechanism for Tropical Cyclone Forecasts

Cited by: 0
|
Authors
Li, Xiaohui [1 ]
Han, Xinhai [1 ,2 ]
Yang, Jingsong [1 ,2 ,3 ]
Wang, Jiuke [3 ,4 ]
Han, Guoqi [5 ]
Ding, Jun [6 ]
Shen, Hui [6 ]
Yan, Jun [6 ]
Affiliations
[1] Minist Nat Resources, Inst Oceanog 2, Satellite Ocean Environm Dynam, Hangzhou 310012, Peoples R China
[2] Shanghai Jiao Tong Univ, Sch Oceanog, Shanghai 200240, Peoples R China
[3] Southern Marine Sci & Engn Guangdong Lab Zhuhai, Zhuhai 519082, Peoples R China
[4] Sun Yat Sen Univ, Sch Artificial Intelligence, Zhuhai 519082, Peoples R China
[5] Fisheries & Oceans Canada, Inst Ocean Sci, Sidney, BC V8L 4B2, Canada
[6] Zhejiang Marine Monitoring & Forecasting Ctr, Hangzhou 310007, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
tropical cyclones; spatiotemporal prediction; generative adversarial network; attention spatiotemporal mechanism; deep learning;
DOI
10.1007/s00376-024-3243-6
Chinese Library Classification (CLC)
P4 [Atmospheric Sciences (Meteorology)];
Discipline code
0706 ; 070601 ;
Abstract
Tropical cyclones (TCs) are complex and powerful weather systems, and accurately forecasting their path, structure, and intensity remains a critical focus and challenge in meteorological research. In this paper, we propose an Attention Spatio-Temporal predictive Generative Adversarial Network (AST-GAN) model for predicting the temporal and spatial distribution of TCs. The model forecasts the spatial distribution of TC wind speeds for the next 15 hours at 3-hour intervals, emphasizing the cyclone's center, high wind-speed areas, and its asymmetric structure. To effectively capture spatiotemporal feature transfer at different time steps, we employ a channel attention mechanism for feature selection, enhancing model performance and reducing parameter redundancy. We utilized High-Resolution Weather Research and Forecasting (HWRF) data to train our model, allowing it to assimilate a wide range of TC motion patterns. The model is versatile and can be applied to various complex scenarios, such as multiple TCs moving simultaneously or TCs approaching landfall. Our proposed model demonstrates superior forecasting performance, achieving a root-mean-square error (RMSE) of 0.71 m s⁻¹ for overall wind speed and 2.74 m s⁻¹ for maximum wind speed when benchmarked against ground truth data from HWRF. Furthermore, the model underwent optimization and independent testing using ERA5 reanalysis data, showcasing its stability and scalability. After fine-tuning on the ERA5 dataset, the model achieved an RMSE of 1.33 m s⁻¹ for wind speed and 1.75 m s⁻¹ for maximum wind speed. The AST-GAN model outperforms other state-of-the-art models in RMSE on both the HWRF and ERA5 datasets, maintaining its superior performance and demonstrating its effectiveness for spatiotemporal prediction of TCs.
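The abstract mentions a channel attention mechanism used for feature selection across time steps, but does not spell out its form. As a rough illustration only, a squeeze-and-excitation style channel attention block (a common family of channel attention; the authors' exact design may differ, and the weights `w1`/`w2` here are illustrative placeholders) can be sketched as:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention over a
    (channels, height, width) feature map: pool each channel to a
    scalar, pass through a two-layer bottleneck, and rescale channels."""
    # Squeeze: global average pool, one descriptor per channel
    z = x.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gating in (0, 1)
    h = np.maximum(w1 @ z, 0.0)                  # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))          # shape (C,)
    # Scale: reweight each channel of the input feature map
    return x * s[:, None, None]

# Toy example: 4 channels, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1
w2 = rng.standard_normal((4, 2)) * 0.1
y = channel_attention(x, w1, w2)
```

Because the sigmoid gates lie in (0, 1), each output channel is a damped copy of its input, which is how such a block suppresses less informative channels before features are passed between time steps.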
Pages: 67-78
Page count: 12
Related papers
50 records in total
  • [31] A hybrid attention generative adversarial network for Chinese landscape painting
    Lyu, Qiongshuai
    Zhao, Na
    Sun, Zhiyuan
    Yang, Yu
    Zhang, Chi
    Shi, Ruolin
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [32] SELF-ATTENTION GENERATIVE ADVERSARIAL NETWORK FOR SPEECH ENHANCEMENT
    Huy Phan
    Nguyen, Huy Le
    Chen, Oliver Y.
    Koch, Philipp
    Duong, Ngoc Q. K.
    McLoughlin, Ian
    Mertins, Alfred
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7103 - 7107
  • [33] Image motion deblurring via attention generative adversarial network
    Zhang, Yucun
    Li, Tao
    Li, Qun
    Fu, Xianbin
    Kong, Tao
    COMPUTERS & GRAPHICS-UK, 2023, 111 : 122 - 132
  • [34] Self-attention generative adversarial network with the conditional constraint
    Jia Y.
    Ma L.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2019, 46 (06): : 163 - 170
  • [35] Generative attention adversarial classification network for unsupervised domain adaptation
    Chen, Wendong
    Hu, Haifeng
    PATTERN RECOGNITION, 2020, 107 (107)
  • [36] Boosting attention fusion generative adversarial network for image denoising
    Qiongshuai Lyu
    Min Guo
    Miao Ma
    Neural Computing and Applications, 2021, 33 : 4833 - 4847
  • [37] Boosting attention fusion generative adversarial network for image denoising
    Lyu, Qiongshuai
    Guo, Min
    Ma, Miao
    NEURAL COMPUTING & APPLICATIONS, 2021, 33 (10): : 4833 - 4847
  • [38] Cooperative attention generative adversarial network for unsupervised domain adaptation
    Fu, Shuai
    Chen, Jing
    Lei, Liang
    KNOWLEDGE-BASED SYSTEMS, 2023, 261
  • [39] Generative Adversarial Network with Spatial Attention for Face Attribute Editing
    Zhang, Gang
    Kan, Meina
    Shan, Shiguang
    Chen, Xilin
    COMPUTER VISION - ECCV 2018, PT VI, 2018, 11210 : 422 - 437
  • [40] Multi-Attention Generative Adversarial Network for image captioning
    Wei, Yiwei
    Wang, Leiquan
    Cao, Haiwen
    Shao, Mingwen
    Wu, Chunlei
    NEUROCOMPUTING, 2020, 387 : 91 - 99