Occlusion-Aware Human Mesh Model-Based Gait Recognition

Cited by: 20
Authors
Xu, Chi [1 ]
Makihara, Yasushi [1 ]
Li, Xiang [1 ]
Yagi, Yasushi [1 ]
Affiliations
[1] Osaka Univ, Inst Sci & Ind Res, Osaka 5670047, Japan
Keywords
Gait recognition; Feature extraction; Image reconstruction; Shape; Three-dimensional displays; Cameras; Videos; Partial occlusion; Human mesh model; Robust; Identification
DOI
10.1109/TIFS.2023.3236181
Chinese Library Classification
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Partial occlusion of the human body caused by obstacles or a limited camera field of view often occurs in surveillance videos, which degrades the performance of gait recognition in practice. Existing methods for gait recognition under occlusion require a bounding box or the height of the full human body as a prerequisite, which is unobservable in occlusion scenarios. In this paper, we propose an occlusion-aware model-based gait recognition method that works directly on gait videos under occlusion without the above-mentioned prerequisite. Specifically, given a gait sequence that contains only non-occluded body parts in the images, we directly fit a skinned multi-person linear (SMPL)-based human mesh model to the input images without any pre-normalization or registration of the human body. We further use the pose and shape features extracted from the estimated SMPL model for recognition, and use the extracted camera parameters in an occlusion attenuation module to reduce intra-subject variation in human model fitting caused by occlusion pattern differences. Experiments on occlusion samples simulated from the OU-MVLP dataset demonstrated the effectiveness of the proposed method, which outperformed state-of-the-art gait recognition methods by about 15% in rank-1 identification rate and 2% in equal error rate in the identification and verification scenarios, respectively.
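The pipeline described in the abstract (fit SMPL per frame, extract pose and shape features, compare gait signatures) can be sketched as follows. This is an illustrative stand-in, not the paper's actual model: the feature aggregation (mean/std statistics) and Euclidean matching below are hypothetical simplifications, and the random arrays stand in for per-frame SMPL fitting results.

```python
import numpy as np

def gait_signature(poses, shapes):
    """Aggregate per-frame SMPL parameters into a fixed-length descriptor.

    poses:  (T, 72) axis-angle pose parameters per frame (stand-in for SMPL fits)
    shapes: (T, 10) shape (beta) parameters per frame
    """
    # Body shape is roughly constant over a sequence, so average it.
    shape_feat = shapes.mean(axis=0)
    # Summarize pose dynamics with per-dimension mean and std over time
    # (a crude proxy for a learned temporal encoder).
    pose_feat = np.concatenate([poses.mean(axis=0), poses.std(axis=0)])
    return np.concatenate([shape_feat, pose_feat])  # length 10 + 72 + 72 = 154

def match_distance(sig_a, sig_b):
    # Smaller distance -> more likely the same subject.
    return np.linalg.norm(sig_a - sig_b)

# Toy usage with random stand-ins for SMPL fits of a probe and a gallery sequence.
rng = np.random.default_rng(0)
probe = gait_signature(rng.normal(size=(30, 72)), rng.normal(size=(30, 10)))
gallery = gait_signature(rng.normal(size=(30, 72)), rng.normal(size=(30, 10)))
d = match_distance(probe, gallery)
```

Because the SMPL fit covers the full body even when only parts are visible, signatures of this form can in principle be compared across occlusion patterns; the paper's occlusion attenuation module (not shown) additionally conditions on estimated camera parameters to suppress fitting variation caused by different occlusions.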
Pages: 1309 - 1321
Page count: 13
Related Papers
(50 in total)
  • [21] Neural Rays for Occlusion-aware Image-based Rendering
    Liu, Yuan
    Peng, Sida
    Liu, Lingjie
    Wang, Qianqian
    Wang, Peng
    Theobalt, Christian
    Zhou, Xiaowei
    Wang, Wenping
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7814 - 7823
  • [22] Shapecollage: Occlusion-Aware, Example-Based Shape Interpretation
    Cole, Forrester
    Isola, Phillip
    Freeman, William T.
    Durand, Fredo
    Adelson, Edward H.
    COMPUTER VISION - ECCV 2012, PT III, 2012, 7574 : 665 - 678
  • [23] Probabilistic Model-Based Silhouette Refinement for Gait Recognition
    Zhang Yuan-Yuan
    Wu Xiao-Juan
    Ruan Qiu-Qi
    Journal of Shanghai Jiaotong University (Science), 2010, 15 (01) : 24 - 30
  • [24] Occlusion-aware Hand Posture Based Interaction on Tabletop Projector
    Fujinawa, Eisuke
    Goto, Kenji
    Irie, Atsushi
    Wu, Songtao
    Xu, Kuanhong
    ADJUNCT PUBLICATION OF THE 32ND ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY (UIST'19 ADJUNCT), 2019, : 113 - 115
  • [25] Model-based feature extraction for gait analysis and recognition
    Bouchrika, Imed
    Nixon, Mark S.
    COMPUTER VISION/COMPUTER GRAPHICS COLLABORATION TECHNIQUES, 2007, 4418 : 150 - +
  • [26] Probabilistic model-based silhouette refinement for gait recognition
    Zhang Y.-Y.
    Wu X.-J.
    Ruan Q.-Q.
    Journal of Shanghai Jiaotong University (Science), 2010, 15 (1) : 24 - 30
  • [27] Torso Orientation: A New Clue for Occlusion-Aware Human Pose Estimation
    Yu, Yang
    Yang, Baoyao
    Yuen, Pong C.
    2016 24TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2016, : 908 - 912
  • [28] CONet: Crowd and occlusion-aware network for occluded human pose estimation
    Bai, Xiuxiu
    Wei, Xing
    Wang, Zengying
    Zhang, Miao
    NEURAL NETWORKS, 2024, 172
  • [29] Solutions for model-based analysis of human gait
    Calow, R
    Michaelis, B
    Al-Hamadi, A
    PATTERN RECOGNITION, PROCEEDINGS, 2003, 2781 : 540 - 547
  • [30] VoteHMR: Occlusion-Aware Voting Network for Robust 3D Human Mesh Recovery from Partial Point Clouds
    Liu, Guanze
    Rong, Yu
    Sheng, Lu
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 955 - 964