Recurrence and Self-attention vs the Transformer for Time-Series Classification: A Comparative Study

Cited by: 7
Authors
Katrompas, Alexander [1 ]
Ntakouris, Theodoros [2 ]
Metsis, Vangelis [1 ]
Affiliations
[1] Texas State Univ, San Marcos, TX 78666 USA
[2] Univ Patras, Patras, Greece
DOI
10.1007/978-3-031-09342-5_10
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The transformer has recently established itself as the state of the art in text processing and has demonstrated impressive results in image processing, leading to a decline in the use of recurrence in neural network models. As established in the seminal paper Attention Is All You Need, recurrence can be removed in favor of a simpler model using only self-attention. While transformers have shown themselves to be robust in a variety of text and image processing tasks, these tasks all have one thing in common: they are inherently non-temporal. Although transformers are also finding success in modeling time-series data, they have limitations compared to recurrent models. We explore a class of problems involving classification and prediction from time-series data and show that recurrence combined with self-attention can meet or exceed the performance of the transformer architecture. This class of problem, temporal classification and the prediction of labels through time from time-series data, is of particular importance to medical data sets, which are often time-series based. (Source code: https://github.com/imics-lab/recurrence-with-self-attention)
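The abstract contrasts a pure self-attention (transformer) encoder with recurrence combined with self-attention. As an illustration only, below is a minimal PyTorch sketch of the latter idea: an LSTM encodes the sequence and one self-attention layer is applied over its hidden states before classification. The class name `RecurrentSelfAttention`, the layer sizes, and the mean-pooling head are assumptions made for this sketch, not the authors' exact architecture; see the linked repository for the actual implementation.

```python
# Minimal sketch (assumption, not the paper's exact model): an LSTM encoder
# followed by a single self-attention layer for time-series classification.
import torch
import torch.nn as nn


class RecurrentSelfAttention(nn.Module):  # hypothetical name, for illustration
    def __init__(self, n_features, n_classes, hidden=64, heads=4):
        super().__init__()
        # Recurrence: encode the multivariate time series step by step.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Self-attention over the LSTM hidden states (not over raw inputs).
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, n_features)
        h, _ = self.lstm(x)                    # (batch, time, hidden)
        a, _ = self.attn(h, h, h)              # self-attention: Q = K = V = h
        h = self.norm(h + a)                   # residual connection + layer norm
        return self.head(h.mean(dim=1))        # mean-pool over time, classify

# Usage: 8 sequences, 100 time steps, 6 sensor channels, 5 classes.
model = RecurrentSelfAttention(n_features=6, n_classes=5)
logits = model(torch.randn(8, 100, 6))
print(logits.shape)                            # torch.Size([8, 5])
```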
Pages: 99-109 (11 pages)
Related Papers (50 in total)
  • [31] Deformable Self-Attention for Text Classification
    Ma, Qianli
    Yan, Jiangyue
    Lin, Zhenxi
    Yu, Liuhong
    Chen, Zipeng
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 1570 - 1581
  • [33] Applying Self-attention for Stance Classification
    Bugueno, Margarita
    Mendoza, Marcelo
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS (CIARP 2019), 2019, 11896 : 51 - 61
  • [34] RsMmFormer: Multimodal Transformer Using Multiscale Self-attention for Remote Sensing Image Classification
    Zhang, Bo
    Ming, Zuheng
    Liu, Yaqian
    Feng, Wei
    He, Liang
    Zhao, Kaixing
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT I, 2024, 14473 : 329 - 339
  • [35] Evaluating the effectiveness of self-attention mechanism in tuberculosis time series forecasting
    Lv, Zhihong
    Sun, Rui
    Liu, Xin
    Wang, Shuo
    Guo, Xiaowei
    Lv, Yuan
    Yao, Min
    Zhou, Junhua
    BMC INFECTIOUS DISEASES, 24 (1)
  • [36] TransDBC: Transformer for Multivariate Time-Series based Driver Behavior Classification
    Vyas, Jayant
    Bhardwaj, Nishit
    Bhumika
    Das, Debasis
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [37] DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting
    Huang, Siteng
    Wang, Donglin
    Wu, Xuehan
    Tang, Ao
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 2129 - 2132
  • [38] Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention
    Pan, Xuran
    Ye, Tianzhu
    Xia, Zhuofan
    Song, Shiji
    Huang, Gao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 2082 - 2091
  • [39] Local self-attention in transformer for visual question answering
    Shen, Xiang
    Han, Dezhi
    Guo, Zihan
    Chen, Chongqing
    Hua, Jie
    Luo, Gaofeng
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16706 - 16723
  • [40] Tree Transformer: Integrating Tree Structures into Self-Attention
    Wang, Yau-Shian
    Lee, Hung-Yi
    Chen, Yun-Nung
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 1061 - 1070