Spatial-temporal transformer for end-to-end sign language recognition

Cited by: 5
Authors
Cui, Zhenchao [1 ,2 ]
Zhang, Wenbo [1 ,2 ,3 ]
Li, Zhaoxin [3 ]
Wang, Zhaoqi [3 ]
Affiliations
[1] Hebei Univ, Sch Cyber Secur & Comp, Baoding 071002, Hebei, Peoples R China
[2] Hebei Univ, Hebei Machine Vis Engn Res Ctr, Baoding 071002, Hebei, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Spatial-temporal encoder; Continuous sign language recognition; Transformer; Patched image; ATTENTION;
DOI
10.1007/s40747-023-00977-w
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continuous sign language recognition (CSLR) is an essential task for communication between hearing-impaired people and those without hearing impairments; it aims to align low-density video sequences with high-density text sequences. Current methods for CSLR are mainly based on convolutional neural networks. However, these methods balance spatial and temporal features poorly during visual feature extraction, making it difficult to improve recognition accuracy. To address this issue, we designed an end-to-end CSLR network: the Spatial-Temporal Transformer Network (STTN). The model encodes and decodes the sign language video as a predicted sequence that is aligned with a given text sequence. First, since the image sequences are too long for the model to handle directly, we chunk the sign language video frames, i.e., "image to patch", which reduces the computational complexity. Second, global features of the sign language video are modeled at the beginning of the model, and the spatial action features of the current video frame and the semantic features of consecutive frames in the temporal dimension are extracted separately, enabling full extraction of visual features. Finally, the model uses a simple cross-entropy loss to align video and text. We extensively evaluated the proposed network on two publicly available datasets, CSL and RWTH-PHOENIX-Weather multi-signer 2014 (PHOENIX-2014); the results demonstrate the superior performance of our work on the CSLR task compared with state-of-the-art methods.
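The pipeline the abstract describes, splitting each frame into patch tokens ("image to patch"), attending spatially within a frame, then attending temporally across frames, can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the class name `SpatialTemporalEncoder` and all dimensions (patch size 16, embedding dim 64, 4 heads) are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class SpatialTemporalEncoder(nn.Module):
    """Hypothetical sketch of a spatial-temporal encoder in the style the
    abstract describes: patch embedding, then spatial self-attention within
    each frame, then temporal self-attention across frames."""

    def __init__(self, patch=16, dim=64, heads=4):
        super().__init__()
        self.patch = patch
        # Linear projection of flattened patches ("image to patch")
        self.proj = nn.Linear(3 * patch * patch, dim)
        self.spatial = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.temporal = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, video):
        # video: (T, 3, H, W) frame sequence; H and W divisible by patch size
        T, C, H, W = video.shape
        p = self.patch
        # Split every frame into non-overlapping p x p patches:
        # (T, C, H, W) -> (T, C, H/p, W/p, p, p) -> (T, N, C*p*p)
        patches = video.unfold(2, p, p).unfold(3, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(T, -1, C * p * p)
        tokens = self.proj(patches)                    # (T, N, dim)
        tokens = self.spatial(tokens)                  # attention within each frame
        frame_feats = tokens.mean(dim=1).unsqueeze(0)  # pool patches -> (1, T, dim)
        return self.temporal(frame_feats).squeeze(0)   # attention across frames -> (T, dim)

enc = SpatialTemporalEncoder()
feats = enc(torch.randn(8, 3, 32, 32))  # 8 frames of 32x32 video
print(feats.shape)  # torch.Size([8, 64]): one feature vector per frame
```

The per-frame feature sequence produced this way would then feed a classifier whose predicted sequence is aligned with the text sequence, per the abstract via a cross-entropy loss.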
Pages: 4645-4656
Page count: 12
Related Papers
50 records
  • [41] Semantic Mask for Transformer based End-to-End Speech Recognition
    Wang, Chengyi
    Wu, Yu
    Du, Yujiao
    Li, Jinyu
    Liu, Shujie
    Lu, Liang
    Ren, Shuo
    Ye, Guoli
    Zhao, Sheng
    Zhou, Ming
    INTERSPEECH 2020, 2020, : 971 - 975
  • [42] An End-to-End Air Writing Recognition Method Based on Transformer
    Tan, Xuhang
    Tong, Jicheng
    Matsumaru, Takafumi
    Dutta, Vibekananda
    He, Xin
    IEEE ACCESS, 2023, 11 : 109885 - 109898
  • [43] END-TO-END MULTI-SPEAKER SPEECH RECOGNITION WITH TRANSFORMER
    Chang, Xuankai
    Zhang, Wangyou
    Qian, Yanmin
    Le Roux, Jonathan
    Watanabe, Shinji
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6134 - 6138
  • [44] Gloss-Free End-to-End Sign Language Translation
    Lin, Kezhou
    Wang, Xiaohan
    Zhu, Linchao
    Sun, Ke
    Zhang, Bang
    Yang, Yi
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12904 - 12916
  • [45] Residual Language Model for End-to-end Speech Recognition
    Tsunoo, Emiru
    Kashiwagi, Yosuke
    Narisetty, Chaitanya
    Watanabe, Shinji
    INTERSPEECH 2022, 2022, : 3899 - 3903
  • [46] End-to-End Spatial Transform Face Detection and Recognition
    Zhang, H.
    Chi, L.
    Virtual Reality and Intelligent Hardware, 2020, 2 (02): : 119 - 131
  • [47] Improving Transformer Based End-to-End Code-Switching Speech Recognition Using Language Identification
    Huang, Zheying
    Wang, Pei
    Wang, Jian
    Miao, Haoran
    Xu, Ji
    Zhang, Pengyuan
    APPLIED SCIENCES-BASEL, 2021, 11 (19):
  • [48] Spatial-Temporal Graph Transformer With Sign Mesh Regression for Skinned-Based Sign Language Production
    Cui, Zhenchao
    Chen, Ziang
    Li, Zhaoxin
    Wang, Zhaoqi
    IEEE ACCESS, 2022, 10 : 127530 - 127539
  • [50] Structure-aware sign language recognition with spatial-temporal scene graph
    Lin, Shiquan
    Xiao, Zhengye
    Wang, Lixin
    Wan, Xiuan
    Ni, Lan
    Fang, Yuchun
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (06)