MASA: Motion-Aware Masked Autoencoder With Semantic Alignment for Sign Language Recognition

Cited by: 0
Authors
Zhao, Weichao [1 ]
Hu, Hezhen [2 ]
Zhou, Wengang [1 ]
Mao, Yunyao [1 ]
Wang, Min [3 ]
Li, Houqiang [1 ]
Affiliations
[1] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, CAS Key Lab Technol Geospatial Informat Proc & Ap, Hefei 230027, Peoples R China
[2] Univ Texas Austin, Visual Informat Grp, Austin, TX 78705 USA
[3] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei 230030, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Masked autoencoder; motion-aware; semantic alignment; sign language recognition;
DOI
10.1109/TCSVT.2024.3409728
CLC Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809
Abstract
Sign language recognition (SLR) has long been plagued by insufficient model representation capabilities. Although current pre-training approaches have alleviated this dilemma to some extent and yielded promising performance by employing various pretext tasks on sign pose data, these methods still suffer from two primary limitations: i) Explicit motion information is usually disregarded in previous pretext tasks, leading to partial information loss and limited representation capability. ii) Previous methods focus on the local context of a sign pose sequence, without incorporating the guidance of the global meaning of lexical signs. To this end, we propose a Motion-Aware masked autoencoder with Semantic Alignment (MASA) that integrates rich motion cues and global semantic information in a self-supervised learning paradigm for SLR. Our framework contains two crucial components, i.e., a motion-aware masked autoencoder (MA) and a momentum semantic alignment module (SA). Specifically, in MA, we introduce an autoencoder architecture with a motion-aware masking strategy to reconstruct motion residuals of masked frames, thereby explicitly exploring dynamic motion cues among sign pose sequences. Moreover, in SA, we embed our framework with global semantic awareness by aligning the embeddings of different augmented samples from the input sequence in the shared latent space. In this way, our framework can simultaneously learn local motion cues and global semantic features for comprehensive sign language representation. Furthermore, we conduct extensive experiments to validate the effectiveness of our method, achieving new state-of-the-art performance on four public benchmarks. The source code is publicly available at https://github.com/sakura/MASA.
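The abstract's motion-residual pretext task can be illustrated with a minimal sketch. The paper's exact residual definition and masking policy are not given in this record, so the following is a hypothetical interpretation: motion residuals are taken as frame-to-frame pose differences, and the masking strategy preferentially hides the frames with the largest motion magnitude so that the autoencoder must reconstruct dynamic content.

```python
import numpy as np

def motion_residuals(poses):
    """Frame-to-frame differences of a keypoint sequence.

    poses: array of shape (T, K, 2) -- T frames, K 2-D keypoints.
    Returns an array of shape (T-1, K, 2).
    """
    return poses[1:] - poses[:-1]

def motion_aware_mask(poses, mask_ratio=0.5):
    """Boolean mask over frames, biased toward high-motion frames.

    Scores each frame by the Frobenius norm of its incoming residual
    and masks the top mask_ratio fraction of frames.
    """
    res = motion_residuals(poses)
    scores = np.linalg.norm(res, axis=(1, 2))      # (T-1,) motion per frame
    scores = np.concatenate([[0.0], scores])       # first frame has no residual
    n_mask = int(round(mask_ratio * len(poses)))
    top_motion = np.argsort(scores)[::-1][:n_mask]  # highest-motion frames
    mask = np.zeros(len(poses), dtype=bool)
    mask[top_motion] = True
    return mask
```

In a full pipeline, the encoder would see only the unmasked frames and the decoder would be trained to regress the residuals at the masked positions; the hard part the masking strategy addresses is preventing the model from trivially interpolating near-static frames.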
Pages: 10793-10804
Page count: 12