Time Series Forecasting using Sequence-to-Sequence Deep Learning Framework

Cited: 33
Authors
Du, Shengdong [1 ]
Li, Tianrui [1 ]
Horng, Shi-Jinn [2 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu 611756, Sichuan, Peoples R China
[2] Natl Taiwan Univ Sci & Technol, Dept Comp Sci & Informat Engn, Taipei, Taiwan
Funding
National Natural Science Foundation of China;
Keywords
Time series forecasting; LSTM; Encoder-decoder; PM2.5; Sequence-to-sequence deep learning; HYBRID;
DOI
10.1109/PAAP.2018.00037
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Time series forecasting is a key research problem in many fields, such as financial forecasting, traffic flow forecasting, medical monitoring, intrusion detection, anomaly detection, and air quality forecasting. In this paper, we propose a sequence-to-sequence deep learning framework for multivariate time series forecasting, which addresses the dynamic, spatial-temporal, and nonlinear characteristics of multivariate time series data with an LSTM-based encoder-decoder architecture. Through air quality multivariate time series forecasting experiments, we show that the proposed model achieves better forecasting performance than classic shallow learning and baseline deep learning models, and that the predicted PM2.5 values match the ground truth well under both single-timestep and multi-timestep forward forecasting conditions. The experimental results show that our model handles multivariate time series forecasting with satisfactory accuracy.
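The abstract describes an LSTM-based encoder-decoder: an encoder LSTM summarizes a multivariate input window into a hidden state, and a decoder LSTM rolls that state forward to emit a multi-timestep forecast (e.g., PM2.5). The following is a minimal NumPy sketch of that forward pass, not the authors' implementation; the function names (`lstm_step`, `seq2seq_forecast`), the autoregressive decoder feeding, and all parameter shapes are illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                 # stacked pre-activations, shape (4H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))     # output gate
    g = np.tanh(z[3*H:])                  # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def seq2seq_forecast(series, horizon, params):
    """Encode a multivariate window, then decode `horizon` future steps.

    The decoder is fed its own previous prediction (autoregressive rollout),
    which is one common seq2seq decoding scheme for multi-step forecasting.
    """
    We, Ue, be, Wd, Ud, bd, Wout, bout = params
    H = be.shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    for x in series:                      # encoder: summarize the input window
        h, c = lstm_step(x, h, c, We, Ue, be)
    y = Wout @ h + bout                   # first prediction from final encoder state
    preds = [y]
    for _ in range(horizon - 1):          # decoder: roll forward step by step
        h, c = lstm_step(y, h, c, Wd, Ud, bd)
        y = Wout @ h + bout
        preds.append(y)
    return np.stack(preds)                # shape (horizon, output_dim)

# Usage with random (untrained) weights: 24 past timesteps, 6 features,
# forecasting a single target (e.g., PM2.5) 6 steps ahead.
rng = np.random.default_rng(0)
H, D_in, D_out = 16, 6, 1
params = (rng.standard_normal((4*H, D_in)) * 0.1,   # We
          rng.standard_normal((4*H, H)) * 0.1,      # Ue
          np.zeros(4*H),                            # be
          rng.standard_normal((4*H, D_out)) * 0.1,  # Wd
          rng.standard_normal((4*H, H)) * 0.1,      # Ud
          np.zeros(4*H),                            # bd
          rng.standard_normal((D_out, H)) * 0.1,    # Wout
          np.zeros(D_out))                          # bout
window = rng.standard_normal((24, D_in))
forecast = seq2seq_forecast(window, horizon=6, params=params)
```

In practice the weights would be trained end-to-end (e.g., with teacher forcing, feeding ground-truth values to the decoder during training); the sketch only shows how the encoder state is handed to the decoder for multi-timestep forward forecasting.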
Pages: 171-176
Page count: 6
Related Papers
50 records
  • [1] A novel sequence-to-sequence based deep learning model for satellite cloud image time series prediction
    Lian, Jie
    Wu, Shixin
    Huang, Sirong
    Zhao, Qin
    [J]. ATMOSPHERIC RESEARCH, 2024, 306
  • [2] Deep Reinforcement Learning for Sequence-to-Sequence Models
    Keneshloo, Yaser
    Shi, Tian
    Ramakrishnan, Naren
    Reddy, Chandan K.
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (07) : 2469 - 2489
  • [3] Foundations of Sequence-to-Sequence Modeling for Time Series
    Kuznetsov, Vitaly
    Mariet, Zelda
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89 : 408 - 417
  • [4] Position-Based Content Attention for Time Series Forecasting with Sequence-to-Sequence RNNs
    Cinar, Yagmur Gizem
    Mirisaee, Hamid
    Goswami, Parantapa
    Gaussier, Eric
    Ait-Bachir, Ali
    Strijov, Vadim
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2017, PT V, 2017, 10638 : 533 - 544
  • [5] Long sequence time-series forecasting with deep learning: A survey
    Chen, Zonglei
    Ma, Minbo
    Li, Tianrui
    Wang, Hongjun
    Li, Chongshou
    [J]. INFORMATION FUSION, 2023, 97
  • [6] SeqOAE: Deep sequence-to-sequence orthogonal auto-encoder for time-series forecasting under variable population sizes
    Chehade, Abdallah
    Hassanieh, Wael
    Krivtsov, Vasiliy
    [J]. RELIABILITY ENGINEERING & SYSTEM SAFETY, 2024, 247
  • [7] Enhanced Sequence-to-Sequence Deep Transfer Learning for Day-Ahead Electricity Load Forecasting
    Laitsos, Vasileios
    Vontzos, Georgios
    Tsiovoulos, Apostolos
    Bargiotas, Dimitrios
    Tsoukalas, Lefteri H.
    [J]. ELECTRONICS, 2024, 13 (10)
  • [8] Sequence-to-Sequence Model with Attention for Time Series Classification
    Tang, Yujin
    Xu, Jianfeng
    Matsumoto, Kazunori
    Ono, Chihiro
    [J]. 2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2016, : 503 - 510
  • [9] Forecasting of Patient-Specific Kidney Transplant Function With a Sequence-to-Sequence Deep Learning Model
    Van Loon, Elisabet
    Zhang, Wanqiu
    Coemans, Maarten
    De Vos, Maarten
    Emonds, Marie-Paule
    Scheffner, Irina
    Gwinner, Wilfried
    Kuypers, Dirk
    Senev, Aleksandar
    Tinel, Claire
    Van Craenenbroeck, Amaryllis H.
    De Moor, Bart
    Naesens, Maarten
    [J]. JAMA NETWORK OPEN, 2021, 4 (12)
  • [10] Sequence-to-Sequence Deep Learning for Eye Movement Classification
    Startsev, Mikhail
    Agtzidis, Ioannis
    Dorr, Michael
    [J]. PERCEPTION, 2019, 48 : 200 - 200