View-invariant gait recognition based on Kinect skeleton feature

Cited by: 0
Authors
Jiande Sun
Yufei Wang
Jing Li
Wenbo Wan
De Cheng
Huaxiang Zhang
Affiliations
[1] Shandong Normal University,School of Information Science and Engineering
[2] Shandong Normal University,Institute of Data Science and Technology
[3] Shandong University,School of Information Science and Engineering
[4] Shandong Management University,School of Mechanical and Electrical Engineering
[5] Xi’an Jiaotong University,Institute of Artificial Intelligence and Robotics
Keywords
Gait Recognition; Second Generation Kinect; View-Invariant; 3D Joint Information; Gait Dataset;
DOI
Not available
Abstract
Gait recognition is a popular remote biometric identification technology, and robustness against view variation remains one of its main challenges. In this paper, the second-generation Kinect (2G–Kinect) is used as a tool to build a 3D-skeleton-based gait dataset, which includes both the 2D silhouette images captured by the 2G–Kinect and the corresponding 3D coordinates of the skeleton joints. Based on this dataset, a human walking model is constructed. Referring to this model, the lengths of specific skeleton segments are selected as static features, and the swing angles of the limbs as dynamic features, both of which are verified to be view-invariant. In addition, the gait recognition abilities of the static and dynamic features are investigated separately. Based on this investigation, a view-invariant gait recognition scheme is proposed that fuses the static and dynamic features at the matching level and uses the nearest-neighbor (NN) method for recognition. Comparisons between existing Kinect-based gait recognition methods and the proposed one on different datasets show that the proposed method achieves better recognition performance.
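The feature pipeline described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the joint indices, the choice of bone pairs, the vertical reference axis, the fusion weight `w`, and the Euclidean distance measure are all assumptions made for the sketch.

```python
import numpy as np

def bone_lengths(joints, bone_pairs):
    """Static feature: Euclidean lengths of selected skeleton segments.

    joints: (J, 3) array of 3D joint coordinates from one Kinect frame.
    bone_pairs: list of (parent, child) joint-index pairs (assumed choice).
    """
    return np.array([np.linalg.norm(joints[a] - joints[b]) for a, b in bone_pairs])

def swing_angle(hip, knee):
    """Dynamic feature: swing angle of the thigh relative to the vertical axis,
    in degrees (vertical reference axis is an assumption)."""
    v = knee - hip
    vertical = np.array([0.0, -1.0, 0.0])  # assuming the sensor's y-axis points up
    cosang = np.dot(v, vertical) / (np.linalg.norm(v) * np.linalg.norm(vertical))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def fused_nn_match(probe_static, probe_dynamic, gallery, w=0.5):
    """Matching-level fusion of static and dynamic distances, followed by
    nearest-neighbor classification.

    gallery: list of (subject_id, static_vec, dynamic_vec) templates.
    w: fusion weight between the two distances (assumed value).
    """
    best_id, best_score = None, np.inf
    for subject_id, s, d in gallery:
        score = (w * np.linalg.norm(probe_static - s)
                 + (1 - w) * np.linalg.norm(probe_dynamic - d))
        if score < best_score:
            best_id, best_score = subject_id, score
    return best_id
```

In this sketch, the static distance and dynamic distance are combined into a single match score per gallery subject, and the subject with the smallest fused score is returned, mirroring the matching-level fusion plus NN recognition described in the abstract.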
Pages: 24909–24935
Page count: 26