Transferring Cross-domain Knowledge for Video Sign Language Recognition

Cited by: 67
Authors
Li, Dongxu [1 ,2 ]
Yu, Xin [1 ,2 ,3 ]
Xu, Chenchen [1 ,4 ]
Petersson, Lars [1 ,4 ]
Li, Hongdong [1 ,2 ]
Affiliations
[1] Australian Natl Univ, Canberra, ACT, Australia
[2] Australian Ctr Robot Vis ACRV, Brisbane, Qld, Australia
[3] Univ Technol Sydney, Sydney, NSW, Australia
[4] DATA61 CSIRO, Sydney, NSW, Australia
DOI
10.1109/CVPR42600.2020.00624
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Word-level sign language recognition (WSLR) is a fundamental task in sign language interpretation. It requires models to recognize isolated sign words from videos. However, annotating WSLR data requires expert knowledge, which limits WSLR dataset acquisition. In contrast, abundant subtitled sign news videos are available on the internet. Since these videos have no word-level annotations and exhibit a large domain gap from isolated signs, they cannot be directly used for training WSLR models. We observe that despite the large domain gap, isolated and news signs share the same visual concepts, such as hand gestures and body movements. Motivated by this observation, we propose a novel method that learns domain-invariant descriptors and fertilizes WSLR models by transferring knowledge of subtitled news signs to them. To this end, we extract news signs using a base WSLR model, and then design a classifier jointly trained on news and isolated signs to coarsely align the two domains. To learn domain-invariant features within each class and suppress domain-specific ones, our method further resorts to an external memory that stores the class centroids of the aligned news signs. We then design a temporal attention mechanism based on the learnt descriptors to improve recognition performance. Experimental results on standard WSLR datasets show that our method significantly outperforms previous state-of-the-art methods. We also demonstrate its effectiveness on automatically localizing signs in sign news videos, achieving 28.1 AP@0.5.
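The abstract outlines two coupled components: an external memory holding per-class centroids computed from aligned news signs, and a temporal attention mechanism driven by the resulting domain-invariant descriptors. Below is a minimal PyTorch sketch of that memory-read-plus-attention idea. It is an illustration, not the authors' implementation: the module name, the moving-average centroid update, and the dot-product attention form are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentroidMemoryAttention(nn.Module):
    """Sketch: an external memory of per-class centroids drives temporal
    attention over per-frame features (names and update rule are assumed)."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # External memory: one centroid per sign class; per the abstract, it
        # stores class centroids of aligned news-sign features.
        self.register_buffer("memory", torch.zeros(num_classes, feat_dim))
        self.query_proj = nn.Linear(feat_dim, feat_dim)

    @torch.no_grad()
    def update_memory(self, class_ids, feats, momentum=0.9):
        # Hypothetical exponential-moving-average centroid update from a
        # batch of aligned news-sign features (one feature row per sample).
        for c, f in zip(class_ids.tolist(), feats):
            self.memory[c] = momentum * self.memory[c] + (1 - momentum) * f

    def forward(self, frame_feats):
        # frame_feats: (B, T, D) per-frame features from a base WSLR backbone.
        B, T, D = frame_feats.shape
        # Memory read: each frame soft-attends over class centroids to obtain
        # a domain-invariant descriptor.
        q = self.query_proj(frame_feats)                         # (B, T, D)
        sim = q @ self.memory.t() / D ** 0.5                     # (B, T, C)
        read = F.softmax(sim, dim=-1) @ self.memory              # (B, T, D)
        # Temporal attention: frames whose features agree with the retrieved
        # descriptor receive larger weights in the clip-level feature.
        scores = (frame_feats * read).sum(dim=-1) / D ** 0.5     # (B, T)
        alpha = F.softmax(scores, dim=-1)                        # (B, T)
        clip_feat = (alpha.unsqueeze(-1) * frame_feats).sum(dim=1)  # (B, D)
        return clip_feat, alpha


# Toy usage: 4 clips of 16 frames with 512-dim features, 100 sign classes.
model = CentroidMemoryAttention(num_classes=100, feat_dim=512)
model.update_memory(torch.randint(0, 100, (8,)), torch.randn(8, 512))
clip_feat, alpha = model(torch.randn(4, 16, 512))
print(clip_feat.shape, alpha.shape)  # torch.Size([4, 512]) torch.Size([4, 16])
```

In the paper's pipeline, a base WSLR backbone would supply frame_feats, and the memory would be populated from news-sign features only after the coarse classifier-based alignment step described in the abstract.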
Pages: 6204 - 6213
Page count: 10
Related Papers
50 records in total
  • [1] Cross Transferring Activity Recognition to Word Level Sign Language Detection
    Radhakrishnan, Srijith
    Mohan, Nikhil C.
    Varma, Manisimha
    Varma, Jaithra
    Pai, Smitha N.
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 2445 - 2452
  • [2] Cross-Domain NER using Cross-Domain Language Modeling
    Jia, Chen
    Liang, Xiaobo
    Zhang, Yue
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 2464 - 2474
  • [3] Cross-Domain Activity Recognition
    Zheng, Vincent Wenchen
    Hu, Derek Hao
    Yang, Qiang
    UBICOMP'09: PROCEEDINGS OF THE 11TH ACM INTERNATIONAL CONFERENCE ON UBIQUITOUS COMPUTING, 2009, : 61 - 70
  • [4] Dynamic video mix-up for cross-domain action recognition
    Wu, Han
    Song, Chunfeng
    Yue, Shaolong
    Wang, Zhenyu
    Xiao, Jun
    Liu, Yanyang
    NEUROCOMPUTING, 2022, 471 : 358 - 368
  • [5] Cross-domain video action recognition via adaptive gradual learning
    Liu, Dan
    Bao, Zhenwei
    Mi, Jinpeng
    Gan, Yan
    Ye, Mao
    Zhang, Jianwei
    NEUROCOMPUTING, 2023, 556
  • [6] Gig: a knowledge-transferable-oriented framework for cross-domain recognition
    Teng, Luyao
    Tang, Feiyi
    Chang, Chao
    Zheng, Zefeng
    Li, Junxian
    MULTIMEDIA SYSTEMS, 2024, 30 (06)
  • [7] Cross-domain approaches to the language puzzle
    Ries, S.
    Fischer-Baum, S.
    51ST ACADEMY OF APHASIA PROCEEDINGS, 2013, 94 : 211 - 211
  • [8] Cross-Domain Human Action Recognition
    Bian, Wei
    Tao, Dacheng
    Rui, Yong
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2012, 42 (02): 298 - 307
  • [9] Cross-domain human motion recognition
    Yang, Xianghan
    Xia, Zhaoyang
    Mo, Yinan
    Xu, Feng
    2021 SIGNAL PROCESSING SYMPOSIUM (SPSYMPO), 2021, : 300 - 304