Online learning for vector autoregressive moving-average time series prediction

Cited: 18
Authors
Yang, Haimin [1 ]
Pan, Zhisong [1 ]
Tao, Qing [2 ]
Qiu, Junyang [3 ]
Affiliations
[1] PLA Univ Sci & Technol, Coll Command & Informat Syst, Nanjing 210007, Jiangsu, Peoples R China
[2] Army Officer Acad PLA, Dept 11, Hefei 230031, Anhui, Peoples R China
[3] Deakin Univ, Sch Informat Technol, Geelong, Vic 3216, Australia
Funding
National Natural Science Foundation of China
Keywords
Multivariate time series analysis; Online learning; Vector autoregressive moving-average; Time series prediction; Regret bound; Likelihood function
DOI
10.1016/j.neucom.2018.04.011
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Multivariate time series analysis considers multiple time series simultaneously and is in general far more complicated than univariate analysis. VARMA (vector autoregressive moving-average) is one of the most widely used multivariate models for time series prediction. However, in traditional multivariate statistical analysis the parameters of a VARMA model are estimated in a batch manner and the noise terms are assumed Gaussian; batch methods cannot perform satisfactorily in real-time prediction, and in the real world the noise terms are unknown. In this paper, we propose a novel online time series prediction framework for VARMA. We prove that a VAR (vector autoregressive) model can mimic the underlying VARMA model in the online setting. Under this framework, we develop two effective algorithms, VARMA-OGD and VARMA-ONS, for this prediction problem, assuming that the noise terms are generated stochastically and independently. The VARMA-OGD algorithm is based on OGD (online gradient descent) and is valid for general convex loss functions, whereas the VARMA-ONS algorithm, which adopts ONS (online Newton step), is valid only for exp-concave loss functions. Theoretical analysis shows that the regret bounds of the proposed algorithms against the best VARMA predictor in hindsight are sublinear in the number of samples. Our experimental results further validate the effectiveness and robustness of the algorithms. (c) 2018 Published by Elsevier B.V.
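The core idea the abstract describes — approximate the VARMA process with a VAR of sufficient lag order and fit its coefficient matrices online by gradient descent on the prediction loss — can be sketched as follows. This is an illustrative sketch under squared loss, not the authors' exact VARMA-OGD algorithm; the function name, step size, and lag order are assumptions for the example.

```python
import numpy as np

def varma_ogd(X, p, eta=0.05):
    """One-step-ahead online VAR(p) prediction trained with OGD.

    X   : (T, d) array, one d-dimensional observation per row.
    p   : number of lags used by the VAR approximation of the VARMA process.
    eta : OGD step size (illustrative constant step; the paper's analysis
          would dictate the schedule).

    Returns (T, d) predictions; the first p rows are left at zero.
    """
    T, d = X.shape
    A = np.zeros((d, d * p))                 # stacked lag-coefficient matrices
    preds = np.zeros((T, d))
    for t in range(p, T):
        x_lags = X[t - p:t][::-1].reshape(-1)  # most recent lag first
        y_hat = A @ x_lags                     # predict x_t before seeing it
        preds[t] = y_hat
        err = y_hat - X[t]                     # squared-loss residual
        A -= eta * np.outer(err, x_lags)       # OGD step on ||A x_lags - x_t||^2
    return preds
```

On a stable VAR-generated series, the online predictor's late-round squared error falls well below that of the trivial zero predictor, which is the qualitative behavior a sublinear regret bound implies.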
Pages: 9-17