Spatial-temporal transformer for end-to-end sign language recognition

Cited: 5
Authors
Cui, Zhenchao [1 ,2 ]
Zhang, Wenbo [1 ,2 ,3 ]
Li, Zhaoxin [3 ]
Wang, Zhaoqi [3 ]
Affiliations
[1] Hebei Univ, Sch Cyber Secur & Comp, Baoding 071002, Hebei, Peoples R China
[2] Hebei Univ, Hebei Machine Vis Engn Res Ctr, Baoding 071002, Hebei, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Spatial-temporal encoder; Continuous sign language recognition; Transformer; Patched image; ATTENTION;
DOI
10.1007/s40747-023-00977-w
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Continuous sign language recognition (CSLR) is an essential task for communication between hearing-impaired people and the hearing population, and it aims at aligning high-density video sequences with low-density text sequences. Current methods for CSLR are mainly based on convolutional neural networks. However, these methods balance spatial and temporal features poorly during visual feature extraction, which makes it difficult to improve recognition accuracy. To address this issue, we designed an end-to-end CSLR network: the Spatial-Temporal Transformer Network (STTN). The model encodes and decodes the sign language video into a predicted sequence that is aligned with a given text sequence. First, since the image sequences are too long for the model to handle directly, we chunk the sign language video frames ("image to patch"), which reduces the computational complexity. Second, global features of the sign language video are modeled at the beginning of the network: the spatial action features of the current video frame and the semantic features of consecutive frames along the temporal dimension are extracted separately, so that visual features are fully extracted. Finally, the model uses a simple cross-entropy loss to align video and text. We extensively evaluated the proposed network on two publicly available datasets, CSL and RWTH-PHOENIX-Weather multi-signer 2014 (PHOENIX-2014), and the results demonstrate the superior performance of our work on the CSLR task compared to state-of-the-art methods.
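The abstract names two concrete mechanisms: splitting each frame into patches so the transformer attends over a shorter token sequence, and extracting spatial features within a frame separately from temporal features across frames. The sketch below illustrates both ideas; it is a minimal illustration assuming PyTorch, and all names (PatchEmbed, SpatialTemporalBlock), layer sizes, and design choices are assumptions made for clarity, not the authors' released implementation.

```python
# Minimal sketch (not the paper's code) of "image to patch" plus
# factorized spatial/temporal self-attention. Sizes are illustrative.
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split each frame into P x P patches and project them to D dims."""

    def __init__(self, patch=16, in_ch=3, dim=256):
        super().__init__()
        # A strided convolution is the standard way to implement patching.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, frames):               # frames: (T, C, H, W)
        x = self.proj(frames)                # (T, D, H/P, W/P)
        return x.flatten(2).transpose(1, 2)  # (T, N_patches, D)


class SpatialTemporalBlock(nn.Module):
    """Attend over patches within a frame, then over time per patch."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                    # x: (T, N, D)
        # Spatial attention: patches of one frame attend to each other.
        s = self.norm1(x)
        x = x + self.spatial(s, s, s)[0]
        # Temporal attention: each patch position attends across frames.
        t = self.norm2(x).transpose(0, 1)    # (N, T, D)
        x = x + self.temporal(t, t, t)[0].transpose(0, 1)
        return x


if __name__ == "__main__":
    video = torch.randn(8, 3, 224, 224)      # 8 frames of a sign video
    tokens = PatchEmbed()(video)             # (8, 196, 256)
    out = SpatialTemporalBlock()(tokens)
    print(out.shape)                         # torch.Size([8, 196, 256])
```

The abstract's final step, aligning the encoded video with the text sequence through a simple cross-entropy loss, would operate on a decoder built over features of this shape; it is omitted from the sketch.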
Pages: 4645-4656
Page count: 12
Related papers
50 items in total
  • [31] End-to-End Speech Recognition of Tamil Language
    Changrampadi, Mohamed Hashim
    Shahina, A.
    Narayanan, M. Badri
    Khan, A. Nayeemulla
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2022, 32 (02): 1309-1323
  • [32] RhythmNet: End-to-End Heart Rate Estimation From Face via Spatial-Temporal Representation
    Niu, Xuesong
    Shan, Shiguang
    Han, Hu
    Chen, Xilin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29: 2409-2423
  • [33] Spatial-Temporal Routing for Supporting End-to-End Hard Deadlines in Multi-hop Networks
    Liu, Xin
    Ying, Lei
    2016 ANNUAL CONFERENCE ON INFORMATION SCIENCE AND SYSTEMS (CISS), 2016
  • [34] Spatial-temporal routing for supporting end-to-end hard deadlines in multi-hop networks
    Liu, Xin
    Wang, Weichang
    Ying, Lei
    PERFORMANCE EVALUATION, 2019, 135
  • [35] RhythmNet: End-to-end Heart Rate Estimation from Face via Spatial-temporal Representation
    Niu, Xuesong
    Shan, Shiguang
    Han, Hu
    Chen, Xilin
    arXiv, 2019
  • [36] Development of an End-to-End Deep Learning Framework for Sign Language Recognition, Translation, and Video Generation
    Natarajan, B.
    Rajalakshmi, E.
    Elakkiya, R.
    Kotecha, Ketan
    Abraham, Ajith
    Gabralla, Lubna Abdelkareim
    Subramaniyaswamy, V.
    IEEE ACCESS, 2022, 10: 104358-104374
  • [37] A comparison between end-to-end approaches and feature extraction based approaches for Sign Language recognition
    Oliveira, Marlon
    Chatbri, Houssem
    Little, Suzanne
    O'Connor, Noel E.
    Sutherland, Alistair
    2017 INTERNATIONAL CONFERENCE ON IMAGE AND VISION COMPUTING NEW ZEALAND (IVCNZ), 2017
  • [38] End-to-End Neural Transformer Based Spoken Language Understanding
    Radfar, Martin
    Mouchtaris, Athanasios
    Kunzmann, Siegfried
    INTERSPEECH 2020, 2020: 866-870
  • [39] END-TO-END MULTI-CHANNEL TRANSFORMER FOR SPEECH RECOGNITION
    Chang, Feng-Ju
    Radfar, Martin
    Mouchtaris, Athanasios
    King, Brian
    Kunzmann, Siegfried
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021: 5884-5888
  • [40] Transformer-based end-to-end scene text recognition
    Zhu, Xinghao
    Zhang, Zhi
    PROCEEDINGS OF THE 2021 IEEE 16TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA 2021), 2021: 1691-1695