GaitDLF: global and local fusion for skeleton-based gait recognition in the wild

Cited by: 0
Authors
Wei, Siwei [1 ,2 ]
Liu, Weijie [1 ]
Wei, Feifei [3 ]
Wang, Chunzhi [1 ]
Xiong, Neal N. [4 ]
Affiliations
[1] Hubei Univ Technol, Sch Comp Sci, Wuhan 430000, Peoples R China
[2] CCCC Second Highway Consultants Co Ltd, Wuhan 430056, Peoples R China
[3] Hubei Univ Econ, Sch Informat Management, Wuhan 430205, Peoples R China
[4] Northeastern State Univ, 611 N Grand Ave, Tulsa, OK 74464 USA
Source
JOURNAL OF SUPERCOMPUTING | 2024 / Vol. 80 / No. 12
Funding
National Natural Science Foundation of China;
Keywords
Gait recognition; Computer vision; Pattern recognition; Deep learning; NETWORKS; SCHEME;
DOI
10.1007/s11227-024-06089-7
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Gait recognition, an emerging long-range biometric, is finding application in a number of fields including video surveillance. As pose estimators have become more robust and realistic gait recognition must contend with many unpredictable factors, skeleton-based methods have emerged that better meet these challenging recognition needs. However, existing approaches focus primarily on extracting global skeletal features, neglecting the intricate motion of local body parts and overlooking inter-limb relationships. Our solution to these challenges is the dynamic local fusion network (GaitDLF), a novel gait neural network for complex environments that adds a detail-aware stream to the conventional direct extraction of global skeleton features, providing an enhanced representation of gait. To extract discriminative local motion information, we introduce predefined body-part assignments for each joint in the skeletal structure. By segmenting and mapping the overall skeleton according to these limb divisions, limb-level motion features are obtained. In addition, we dynamically fuse the motion features from different limbs and enhance each limb's motion representation with both global context information and the local context of the limb-level motion features. Aggregating local motion features from different body parts improves the ability to extract discriminative gait features across individuals. Experiments on CASIA-B, Gait3D, and GREW show that our model extracts more comprehensive gait features than state-of-the-art skeleton-based methods, demonstrating that our method is better suited than appearance-based methods to gait recognition in complex, in-the-wild environments.
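The abstract describes two concrete mechanisms: assigning each skeleton joint to a predefined body part to obtain limb-level motion features, and dynamically fusing those limb features using global and local context. A minimal sketch of both ideas follows, assuming a COCO-style 17-joint skeleton and a simple attention-style fusion; the joint-to-part grouping, feature dimensions, and fusion rule are illustrative assumptions, not the published GaitDLF architecture.

```python
# Illustrative sketch only: limb-level pooling plus context-weighted fusion,
# under an assumed joint grouping; not the authors' GaitDLF implementation.
import torch
import torch.nn as nn

# Hypothetical mapping from a 17-joint (COCO-style) skeleton to five body parts.
BODY_PARTS = {
    "torso":     [0, 1, 2, 3, 4, 5, 6, 11, 12],
    "left_arm":  [5, 7, 9],
    "right_arm": [6, 8, 10],
    "left_leg":  [11, 13, 15],
    "right_leg": [12, 14, 16],
}

class LimbLevelFusion(nn.Module):
    """Pools per-joint features into limb-level features, then fuses them
    with attention weights conditioned on a global context vector."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)         # scores each limb feature
        self.proj = nn.Linear(feat_dim, feat_dim)  # projects limb features

    def forward(self, joint_feats: torch.Tensor) -> torch.Tensor:
        # joint_feats: (batch, num_joints, feat_dim) per-joint motion features.
        limb_feats = [joint_feats[:, idx, :].mean(dim=1)   # local pooling per part
                      for idx in BODY_PARTS.values()]
        limbs = torch.stack(limb_feats, dim=1)              # (batch, 5, feat_dim)

        global_ctx = limbs.mean(dim=1, keepdim=True)         # global context
        scores = torch.softmax(self.attn(limbs + global_ctx), dim=1)
        # Weighted aggregation of limb-level features into one gait descriptor.
        return (scores * self.proj(limbs)).sum(dim=1)        # (batch, feat_dim)

# Usage: fuse limb-level features for a batch of 8 skeleton feature maps.
fused = LimbLevelFusion(feat_dim=64)(torch.randn(8, 17, 64))
print(fused.shape)  # torch.Size([8, 64])
```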
Pages: 17606 - 17632
Number of pages: 27
Related papers
50 records in total
  • [41] Global-Local Motion Transformer for Unsupervised Skeleton-Based Action Learning
    Kim, Boeun
    Chang, Hyung Jin
    Kim, Jungho
    Choi, Jin Young
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 209 - 225
  • [42] Global-local contrastive multiview representation learning for skeleton-based action recognition
    Bian, Cunling
    Feng, Wei
    Meng, Fanbo
    Wang, Song
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 229
  • [43] Local and Non-local Context Graph Convolutional Networks for Skeleton-Based Action Recognition
    Gao, Zikai
    Zhao, Yang
    Han, Zhe
    Wang, Kang
    Dou, Yong
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT III, 2021, 12893 : 243 - 254
  • [44] RELATIONAL NETWORK FOR SKELETON-BASED ACTION RECOGNITION
    Zheng, Wu
    Li, Lin
    Zhang, Zhaoxiang
    Huang, Yan
    Wang, Liang
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 826 - 831
  • [46] SpatioTemporal focus for skeleton-based action recognition
    Wu, Liyu
    Zhang, Can
    Zou, Yuexian
    PATTERN RECOGNITION, 2023, 136
  • [47] Skeleton-based Dynamic hand gesture recognition
    De Smedt, Quentin
    Wannous, Hazem
    Vandeborre, Jean-Philippe
    PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2016), 2016, : 1206 - 1214
  • [48] Learning view-invariant features using stacked autoencoder for skeleton-based gait recognition
    Hasan, Md Mahedi
    Mustafa, Hossen Asiful
    IET COMPUTER VISION, 2021, 15 (07) : 527 - 545
  • [49] Multi-Stream Fusion Network for Skeleton-Based Construction Worker Action Recognition
    Tian, Yuanyuan
    Liang, Yan
    Yang, Haibin
    Chen, Jiayu
    SENSORS, 2023, 23 (23)
  • [50] Temporal-Channel Attention and Convolution Fusion for Skeleton-Based Human Action Recognition
    Liang, Chengwu
    Yang, Jie
    Du, Ruolin
    Hu, Wei
    Hou, Ning
    IEEE ACCESS, 2024, 12 : 64937 - 64948