Recurrence and Self-attention vs the Transformer for Time-Series Classification: A Comparative Study

Cited by: 7
Authors
Katrompas, Alexander [1 ]
Ntakouris, Theodoros [2 ]
Metsis, Vangelis [1 ]
Affiliations
[1] Texas State Univ, San Marcos, TX 78666 USA
[2] Univ Patras, Patras, Greece
Keywords
DOI
10.1007/978-3-031-09342-5_10
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, the transformer has established itself as the state of the art in text processing and has demonstrated impressive results in image processing, leading to a decline in the use of recurrence in neural network models. As established in the seminal paper "Attention Is All You Need," recurrence can be removed in favor of a simpler model that uses only self-attention. While transformers have proven robust across a variety of text and image processing tasks, these tasks all have one thing in common: they are inherently non-temporal. Although transformers are also finding success in modeling time-series data, they have limitations compared to recurrent models. We explore a class of problems involving classification and prediction from time-series data and show that recurrence combined with self-attention can meet or exceed the performance of the transformer architecture. This class of problem, temporal classification and the prediction of labels through time from time-series data, is of particular importance to medical data sets, which are often time-series based. (Source code: https://github.com/imics-lab/recurrence-with-self-attention)
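To make the idea concrete, the sketch below illustrates the kind of recurrence-plus-self-attention classifier the abstract describes: an LSTM encodes the time series and multi-head self-attention is applied over its hidden states before pooling to a sequence-level label. This is a minimal PyTorch sketch with assumed layer sizes and class names, not the authors' implementation; their code is available in the repository linked above.

    # Illustrative sketch only (assumed sizes/names, not the authors' code):
    # LSTM encoder -> multi-head self-attention over hidden states -> pooled logits.
    import torch
    import torch.nn as nn

    class RecurrentSelfAttentionClassifier(nn.Module):
        def __init__(self, n_features, n_classes, hidden=64, heads=4):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
            self.norm = nn.LayerNorm(hidden)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                 # x: (batch, time, n_features)
            h, _ = self.lstm(x)               # recurrent encoding of the sequence
            a, _ = self.attn(h, h, h)         # self-attention over the hidden states
            z = self.norm(h + a).mean(dim=1)  # residual connection + average over time
            return self.head(z)               # class logits

    # Example: 8 windows of 128 time steps with 9 sensor channels, 6 classes.
    model = RecurrentSelfAttentionClassifier(n_features=9, n_classes=6)
    logits = model(torch.randn(8, 128, 9))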
Pages: 99-109
Page count: 11
Related Papers (50 in total)
  • [21] Bridging Self-Attention and Time Series Decomposition for Periodic Forecasting
    Jiang, Song
    Syed, Tahin
    Zhu, Xuan
    Levy, Joshua
    Aronchik, Boris
    Sun, Yizhou
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3202 - 3211
  • [22] Air Quality Prediction Based on Time Series Decomposition and Convolutional Sparse Self-Attention Mechanism Transformer Model
    Cao, Wenyi
    Qi, Weiwei
    Lu, Peiqi
    IEEE Access, 2024, 12 : 155340 - 155350
  • [23] Universal Graph Transformer Self-Attention Networks
    Dai Quoc Nguyen
    Tu Dinh Nguyen
    Dinh Phung
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 193 - 196
  • [24] Sparse self-attention transformer for image inpainting
    Huang, Wenli
    Deng, Ye
    Hui, Siqi
    Wu, Yang
    Zhou, Sanping
    Wang, Jinjun
    PATTERN RECOGNITION, 2024, 145
  • [25] SST: self-attention transformer for infrared deconvolution
    Gao, Lei
    Yan, Xiaohong
    Deng, Lizhen
    Xu, Guoxia
    Zhu, Hu
    INFRARED PHYSICS & TECHNOLOGY, 2024, 140
  • [26] Lite Vision Transformer with Enhanced Self-Attention
    Yang, Chenglin
    Wang, Yilin
    Zhang, Jianming
    Zhang, He
    Wei, Zijun
    Lin, Zhe
    Yuille, Alan
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 11988 - 11998
  • [27] Synthesizer: Rethinking Self-Attention for Transformer Models
    Tay, Yi
    Bahri, Dara
    Metzler, Donald
    Juan, Da-Cheng
    Zhao, Zhe
    Zheng, Che
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139 : 7192 - 7203
  • [28] Self-Attention Causal Dilated Convolutional Neural Network for Multivariate Time Series Classification and Its Application
    Yang, Wenbiao
    Xia, Kewen
    Wang, Zhaocheng
    Fan, Shurui
    Li, Ling
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 122
  • [29] Classification of Interbeat Interval Time-Series Using Attention Entropy
    Yang, Jiawei
    Choudhary, Gulraiz I.
    Rahardja, Susanto
    Franti, Pasi
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (01) : 321 - 330
  • [30] Cross Hyperspectral and LiDAR Attention Transformer: An Extended Self-Attention for Land Use and Land Cover Classification
    Roy, Swalpa Kumar
    Sukul, Atri
    Jamali, Ali
    Haut, Juan M.
    Ghamisi, Pedram
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 15