Spatial-temporal transformer for end-to-end sign language recognition

Cited by: 5
Authors
Cui, Zhenchao [1 ,2 ]
Zhang, Wenbo [1 ,2 ,3 ]
Li, Zhaoxin [3 ]
Wang, Zhaoqi [3 ]
Affiliations
[1] Hebei Univ, Sch Cyber Secur & Comp, Baoding 071002, Hebei, Peoples R China
[2] Hebei Univ, Hebei Machine Vis Engn Res Ctr, Baoding 071002, Hebei, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Spatial-temporal encoder; Continuous sign language recognition; Transformer; Patched image; ATTENTION;
DOI
10.1007/s40747-023-00977-w
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continuous sign language recognition (CSLR), which aims to align video sequences of low information density with text sequences of high information density, is an essential task for communication between hearing-impaired people and hearing people. Current methods for CSLR are mainly based on convolutional neural networks. However, these methods balance spatial and temporal features poorly during visual feature extraction, which makes it difficult to improve recognition accuracy. To address this issue, we designed an end-to-end CSLR network, the Spatial-Temporal Transformer Network (STTN). The model encodes and decodes the sign language video into a predicted sequence that is aligned with a given text sequence. First, because the image sequences are too long for the model to handle directly, we chunk the sign language video frames into patches ("image to patch"), which reduces the computational complexity. Second, global features of the sign language video are modeled at the beginning of the network, and the spatial action features of the current video frame and the semantic features of consecutive frames in the temporal dimension are extracted separately, so that visual features are fully exploited. Finally, the model uses a simple cross-entropy loss to align video and text. We extensively evaluated the proposed network on two publicly available datasets, CSL and RWTH-PHOENIX-Weather multi-signer 2014 (PHOENIX-2014); the results demonstrate the superior performance of our method on the CSLR task compared with state-of-the-art methods.
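To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of a patch-based spatial-temporal transformer encoder trained with cross-entropy. It is not the authors' STTN implementation: the class name SpatialTemporalSketch, all layer sizes and depths, the mean pooling of patch tokens, and the frame-level gloss targets are illustrative assumptions made only for this example.

import torch
import torch.nn as nn

class SpatialTemporalSketch(nn.Module):
    """Patch each frame, encode patches spatially, then encode frames temporally."""

    def __init__(self, img_size=112, patch_size=16, dim=256, vocab_size=1000):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # "Image to patch": non-overlapping patches per frame, linearly projected,
        # which keeps the token sequence short enough for the transformer.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.spatial_pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        spatial_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        temporal_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.spatial_encoder = nn.TransformerEncoder(spatial_layer, num_layers=2)    # spatial action features
        self.temporal_encoder = nn.TransformerEncoder(temporal_layer, num_layers=2)  # semantics across frames
        self.classifier = nn.Linear(dim, vocab_size)

    def forward(self, frames):                                  # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        x = self.patch_embed(frames.reshape(b * t, c, h, w))    # (b*t, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2) + self.spatial_pos     # (b*t, patches, dim)
        x = self.spatial_encoder(x).mean(dim=1)                 # one spatial feature per frame
        x = self.temporal_encoder(x.reshape(b, t, -1))          # contextualize frames over time
        return self.classifier(x)                               # (b, t, vocab_size) gloss logits

# Alignment with the text sequence via a plain cross-entropy loss, as stated in
# the abstract; the frame-level targets here are random stand-ins for illustration.
model = SpatialTemporalSketch()
frames = torch.randn(2, 8, 3, 112, 112)
targets = torch.randint(0, 1000, (2, 8))
logits = model(frames)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
print(loss.item())

The sketch mirrors the two-stage factorization the abstract describes: a spatial encoder attends within each frame's patches, and a separate temporal encoder attends across per-frame features, rather than one joint attention over all patch-time tokens.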
Pages: 4645-4656
Page count: 12
Related Papers
50 items in total
  • [1] Spatial–temporal transformer for end-to-end sign language recognition
    Zhenchao Cui
    Wenbo Zhang
    Zhaoxin Li
    Zhaoqi Wang
    Complex & Intelligent Systems, 2023, 9 : 4645 - 4656
  • [2] An End-to-End Spatial-Temporal Transformer Model for Surgical Action Triplet Recognition
    Zou, Xiaoyang
    Yu, Derong
    Tao, Rong
    Zheng, Guoyan
    12TH ASIAN-PACIFIC CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING, VOL 2, APCMBE 2023, 2024, 104 : 114 - 120
  • [3] Spatial-temporal feature-based End-to-end Fourier network for 3D sign language recognition
    Abdullahi, Sunusi Bala
    Chamnongthai, Kosin
    Bolon-Canedo, Veronica
    Cancela, Brais
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 248
  • [4] Sign Language Recognition Based on Spatial-Temporal Graph Convolution-Transformer
    Takayama, Natsuki
    Benitez-Garcia, Gibran
    Takahashi, Hiroki
    Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering, 2021, 87 (12): : 1028 - 1035
  • [5] End-to-End Video Object Detection with Spatial-Temporal Transformers
    He, Lu
    Zhou, Qianyu
    Li, Xiangtai
    Niu, Li
    Cheng, Guangliang
    Li, Xiao
    Liu, Wenxuan
    Tong, Yunhai
    Ma, Lizhuang
    Zhang, Liqing
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 1507 - 1516
  • [6] Study and Generalization on an End-to-End Spatial-temporal Driving Model
    Yao, Tingting
    Chen, Xin
    Yuan, Sheng
    Wang, Huaying
    Guo, Lili
    Tian, Bin
    Ai, Yunfeng
    2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019, : 4755 - 4760
  • [7] MyoSign: Enabling End-to-End Sign Language Recognition with Wearables
    Zhang, Qian
    Wang, Dong
    Zhao, Run
    Yu, Yinggang
    PROCEEDINGS OF IUI 2019, 2019, : 650 - 660
  • [8] End-to-end Flow Correlation Tracking with Spatial-temporal Attention
    Zhu, Zheng
    Wu, Wei
    Zou, Wei
    Yan, Junjie
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 548 - 557
  • [9] Hear Sign Language: A Real-Time End-to-End Sign Language Recognition System
    Wang, Zhibo
    Zhao, Tengda
    Ma, Jinxin
    Chen, Hongkai
    Liu, Kaixin
    Shao, Huajie
    Wang, Qian
    Ren, Ju
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2022, 21 (07) : 2398 - 2410
  • [10] Toward an End-to-End Voice to Sign Recognition for Dialect Moroccan Language
    Allak, Anass
    Benelallam, Imade
    Habbouza, Hamdi
    Amallah, Mohamed
    Lecture Notes on Data Engineering and Communications Technologies, 2022, 110 : 253 - 262