Spatial-temporal transformer for end-to-end sign language recognition

Cited by: 5
Authors
Cui, Zhenchao [1 ,2 ]
Zhang, Wenbo [1 ,2 ,3 ]
Li, Zhaoxin [3 ]
Wang, Zhaoqi [3 ]
Affiliations
[1] Hebei Univ, Sch Cyber Secur & Comp, Baoding 071002, Hebei, Peoples R China
[2] Hebei Univ, Hebei Machine Vis Engn Res Ctr, Baoding 071002, Hebei, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Spatial-temporal encoder; Continuous sign language recognition; Transformer; Patched image; ATTENTION;
DOI
10.1007/s40747-023-00977-w
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Continuous sign language recognition (CSLR) is an essential task for communication between hearing-impaired people and those without hearing impairments, and it aims to align low-density video sequences with high-density text sequences. Current CSLR methods are mainly based on convolutional neural networks. However, these methods balance spatial and temporal features poorly during visual feature extraction, which makes it difficult to improve recognition accuracy. To address this issue, we designed an end-to-end CSLR network: the Spatial-Temporal Transformer Network (STTN). The model encodes and decodes the sign language video into a predicted sequence that is aligned with a given text sequence. First, since the image sequences are too long for the model to handle directly, we chunk the sign language video frames, i.e., "image to patch", which reduces the computational complexity. Second, global features of the sign language video are modeled at the beginning of the model, and the spatial action features of the current video frame and the semantic features of consecutive frames in the temporal dimension are extracted separately, so that visual features are fully extracted. Finally, the model uses a simple cross-entropy loss to align video and text. We extensively evaluated the proposed network on two publicly available datasets, CSL and RWTH-PHOENIX-Weather multi-signer 2014 (PHOENIX-2014); the results demonstrate the superior performance of our work on the CSLR task compared with state-of-the-art methods.
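The "image to patch" chunking described in the abstract can be sketched as a simple reshape of each video frame into non-overlapping square patches, as in ViT-style models. This is a minimal illustration only; the frame resolution (224×224) and patch size (16) are assumed for the example, not taken from the paper.

```python
import numpy as np

def frames_to_patches(frames, patch_size):
    """Split each video frame into flattened, non-overlapping square patches.

    frames: array of shape (T, H, W, C) -- T frames of H x W pixels, C channels.
    Returns: array of shape (T, N, patch_size * patch_size * C),
             where N = (H // patch_size) * (W // patch_size) patches per frame.
    """
    T, H, W, C = frames.shape
    p = patch_size
    assert H % p == 0 and W % p == 0, "frame size must be divisible by patch size"
    # Break the spatial grid into (H/p, p, W/p, p) blocks ...
    x = frames.reshape(T, H // p, p, W // p, p, C)
    # ... group the two grid axes together and the two pixel axes together ...
    x = x.transpose(0, 1, 3, 2, 4, 5)  # (T, H/p, W/p, p, p, C)
    # ... and flatten each patch into a single token vector.
    return x.reshape(T, (H // p) * (W // p), p * p * C)

# Example: 4 frames of 224x224 RGB video, 16x16 patches -> 196 tokens per frame
video = np.zeros((4, 224, 224, 3), dtype=np.float32)
patches = frames_to_patches(video, 16)
print(patches.shape)  # (4, 196, 768)
```

Turning each frame into a short sequence of patch tokens is what keeps the transformer's attention cost manageable: attention is quadratic in sequence length, so operating on 196 patch tokens per frame is far cheaper than on raw pixels.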
Pages: 4645-4656 (12 pages)
Related papers
50 records in total
  • [21] Multitask Training with Unlabeled Data for End-to-End Sign Language Fingerspelling Recognition
    Shi, Bowen
    Livescu, Karen
    2017 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2017, : 389 - 396
  • [22] Spatial-Temporal Graph Convolutional Networks for Sign Language Recognition
    de Amorim, Cleison Correia
    Macedo, David
    Zanchettin, Cleber
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: WORKSHOP AND SPECIAL SESSIONS, 2019, 11731 : 646 - 657
  • [23] Spatial-Temporal Enhanced Network for Continuous Sign Language Recognition
    Yin, Wenjie
    Hou, Yonghong
    Guo, Zihui
    Liu, Kailin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (03) : 1684 - 1695
  • [24] End-to-End Video Instance Segmentation via Spatial-Temporal Graph Neural Networks
    Wang, Tao
    Xu, Ning
    Chen, Kean
    Lin, Weiyao
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 10777 - 10786
  • [25] Building an End-to-End Spatial-Temporal Convolutional Network for Video Super-Resolution
    Guo, Jun
    Chao, Hongyang
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 4053 - 4060
  • [27] A study of transformer-based end-to-end speech recognition system for Kazakh language
    Mamyrbayev, Orken
    Oralbekova, Dina
    Alimhan, Keylan
    Turdalykyzy, Tolganay
    Othman, Mohamed
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [28] Adapting Transformer to End-to-end Spoken Language Translation
    Di Gangi, Mattia A.
    Negri, Matteo
    Turchi, Marco
    INTERSPEECH 2019, 2019, : 1133 - 1137
  • [29] Online Compressive Transformer for End-to-End Speech Recognition
    Leong, Chi-Hang
    Huang, Yu-Han
    Chien, Jen-Tzung
    INTERSPEECH 2021, 2021, : 2082 - 2086
  • [30] SimulSLT: End-to-End Simultaneous Sign Language Translation
    Yin, Aoxiong
    Zhao, Zhou
    Liu, Jinglin
    Jin, Weike
    Zhang, Meng
    Zeng, Xingshan
    He, Xiaofei
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4118 - 4127