Optimizing dynamic time warping's window width for time series data mining applications

Cited by: 52
Authors
Dau, Hoang Anh [1]
Silva, Diego Furtado [2]
Petitjean, Francois [3]
Forestier, Germain [4]
Bagnall, Anthony [5]
Mueen, Abdullah [6]
Keogh, Eamonn [1]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
[2] Univ Fed Sao Carlos, Sao Carlos, SP, Brazil
[3] Monash Univ, Melbourne, Vic, Australia
[4] Univ Haute Alsace, Mulhouse, France
[5] Univ East Anglia, Norwich, Norfolk, England
[6] Univ New Mexico, Albuquerque, NM 87131 USA
Funding
Australian Research Council; UK Engineering and Physical Sciences Research Council
Keywords
Time series; Clustering; Classification; Dynamic time warping; Semi-supervised learning
DOI
10.1007/s10618-018-0565-y
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Dynamic Time Warping (DTW) is a highly competitive distance measure for most time series data mining problems. Obtaining the best performance from DTW requires setting its only parameter, the maximum amount of warping (w). In the supervised case with ample data, w is typically set by cross-validation in the training stage. However, this method is likely to yield suboptimal results for small training sets. For the unsupervised case, learning via cross-validation is not possible because we do not have access to labeled data. Many practitioners have thus resorted to assuming that "the larger the better", and they use the largest value of w permitted by the computational resources. However, as we will show, in most circumstances this is a naïve approach that produces inferior clusterings. Moreover, the best warping window width is generally non-transferable between the two tasks, i.e., for a single dataset, practitioners cannot simply apply the best w learned for classification to clustering, or vice versa. In addition, we will demonstrate that the appropriate amount of warping depends not only on the data structure, but also on the dataset size. Thus, even if a practitioner knows the best setting for a given dataset, they will likely be at a loss if they apply that setting to a larger version of that dataset. All these issues seem largely unknown, or at least unappreciated, in the community. In this work, we demonstrate the importance of setting DTW's warping window width correctly, and we also propose novel methods to learn this parameter in both supervised and unsupervised settings. The algorithms we propose to learn w can produce significant improvements in classification accuracy and clustering quality. We demonstrate the correctness of our novel observations and the utility of our ideas by testing them with more than one hundred publicly available datasets. Our forceful results allow us to make a perhaps unexpected claim: an underappreciated "low hanging fruit" in optimizing DTW's performance can produce improvements that make it an even stronger baseline, closing most or all of the improvement gap of the more sophisticated methods proposed in recent years.
Pages: 1074-1120
Page count: 47
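
The abstract above centers on DTW's single parameter, the warping window width w, and on choosing it by cross-validation in the supervised case. The following is a minimal sketch, not code from the paper: it implements DTW constrained by a Sakoe-Chiba band whose half-width is a fraction of the series length, plus a leave-one-out 1-NN sweep over candidate window widths. The function names, the band parameterization as a fraction, and the 0-20% candidate grid are illustrative assumptions, not the authors' method.

```python
# Sketch only: constrained DTW and a cross-validated choice of the warping
# window width w, expressed as a fraction of the series length (assumption).
import numpy as np

def dtw_distance(x, y, w_frac=0.1):
    """DTW between 1-D arrays x and y; warping limited to |i - j| <= w."""
    n, m = len(x), len(y)
    # The band must at least span the length difference to reach cell (n, m).
    w = max(int(round(w_frac * max(n, m))), abs(n - m))
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return np.sqrt(cost[n, m])

def select_w_by_loocv(X, labels, candidates=np.arange(0.0, 0.21, 0.01)):
    """Return the w_frac with the best leave-one-out 1-NN accuracy on (X, labels)."""
    best_w, best_acc = candidates[0], -1.0
    for w_frac in candidates:
        correct = 0
        for i in range(len(X)):
            dists = [dtw_distance(X[i], X[j], w_frac) if j != i else np.inf
                     for j in range(len(X))]
            if labels[int(np.argmin(dists))] == labels[i]:
                correct += 1
        acc = correct / len(X)
        if acc > best_acc:
            best_w, best_acc = w_frac, acc
    return best_w
```

Note that this sweep requires labels, so it only covers the supervised setting described in the abstract; for clustering, where no labels exist, the paper argues that such a choice must be made differently, and that the best w for classification generally does not transfer to clustering or to larger versions of the same dataset.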