Cross-View Gait Recognition by Discriminative Feature Learning

Cited by: 84
Authors
Zhang, Yuqi [1 ]
Huang, Yongzhen [1 ]
Yu, Shiqi [2 ]
Wang, Liang [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Inst Automat, Ctr Res Intelligent Percept & Comp, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Gait recognition; Feature extraction; Three-dimensional displays; Generative adversarial networks; Face recognition; Deep learning; Clothing; discriminative feature learning; angle center loss; spatial-temporal features; DEEP; MOTION; MODEL
DOI
10.1109/TIP.2019.2926208
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, deep learning-based cross-view gait recognition has become popular owing to the strong capacity of convolutional neural networks (CNNs). Current deep learning methods often rely on loss functions widely used in face recognition, e.g., contrastive loss and triplet loss. These loss functions suffer from the problem of hard negative mining. In this paper, a robust, effective, and gait-related loss function, called angle center loss (ACL), is proposed to learn discriminative gait features. The proposed loss function is robust to different local parts and temporal window sizes. Unlike center loss, which learns one center per identity, the proposed loss function learns multiple sub-centers for each angle of the same identity. Only the largest distance between the anchor feature and the corresponding cross-view sub-centers is penalized, which achieves better intra-subject compactness. We also propose to extract discriminative spatial-temporal features with local feature extractors and a temporal attention model. A simplified spatial transformer network is proposed to localize suitable horizontal parts of the human body. Local gait features are extracted for each horizontal part and then concatenated as the descriptor. We introduce long short-term memory (LSTM) units as the temporal attention model to learn an attention score for each frame, e.g., focusing more on discriminative frames and less on low-quality ones. The temporal attention model performs better than temporal average pooling or gait energy images (GEI). By combining the three aspects, we achieve state-of-the-art results on several cross-view gait recognition benchmarks.
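The abstract's description of the angle center loss can be sketched as follows. This is an illustrative, non-authoritative reconstruction from the abstract alone: the function name, the use of Euclidean distance, and the per-sample formulation are assumptions; the paper's actual loss presumably operates over mini-batches, with sub-centers learned and updated during training.

```python
import math

def angle_center_loss(anchor, anchor_angle, sub_centers):
    """Per-sample sketch of angle center loss (ACL), as described in the abstract.

    anchor       : feature vector for one gait sample (list of floats).
    anchor_angle : view angle the anchor was captured from.
    sub_centers  : dict {angle: center vector}, one learned sub-center per
                   view angle for the anchor's identity.

    Only the LARGEST distance between the anchor and the cross-view
    sub-centers of the same identity is penalized, which encourages
    intra-subject compactness across views.
    """
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    # Cross-view: consider sub-centers at angles other than the anchor's own.
    cross_view = [c for ang, c in sub_centers.items() if ang != anchor_angle]
    return max(dist(anchor, c) for c in cross_view)
```

In a real training loop, minimizing this quantity would pull the farthest same-identity sub-center toward the anchor feature, rather than summing over all pairs as contrastive or triplet losses do.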
Pages: 1001-1015
Page count: 15
Related Papers
50 in total
  • [41] Quality-dependent View Transformation Model for Cross-view Gait Recognition
    Muramatsu, Daigo
    Makihara, Yasushi
    Yagi, Yasushi
[J]. 2013 IEEE SIXTH INTERNATIONAL CONFERENCE ON BIOMETRICS: THEORY, APPLICATIONS AND SYSTEMS (BTAS), 2013
  • [42] Couple Metric Learning Based on Separable Criteria with Its Application in Cross-View Gait Recognition
    Wang, Kejun
    Xing, Xianglei
    Yan, Tao
    Lv, Zhuowen
    [J]. BIOMETRIC RECOGNITION (CCBR 2014), 2014, 8833 : 347 - 356
  • [43] Cross-view action recognition by cross-domain learning
    Nie, Weizhi
    Liu, Anan
    Li, Wenhui
    Su, Yuting
    [J]. IMAGE AND VISION COMPUTING, 2016, 55 : 109 - 118
  • [44] Cross-view Action Recognition over Heterogeneous Feature Spaces
    Wu, Xinxiao
    Wang, Han
    Liu, Cuiwei
    Jia, Yunde
    [J]. 2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 609 - 616
  • [45] Cross-View Gait Recognition Using Pairwise Spatial Transformer Networks
    Xu, Chi
    Makihara, Yasushi
    Li, Xiang
    Yagi, Yasushi
    Lu, Jianfeng
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (01) : 260 - 274
  • [46] Enhanced Spatial-Temporal Salience for Cross-View Gait Recognition
    Huang, Tianhuan
    Ben, Xianye
    Gong, Chen
    Zhang, Baochang
    Yan, Rui
    Wu, Qiang
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (10) : 6967 - 6980
  • [47] Cross-View Action Recognition Over Heterogeneous Feature Spaces
    Wu, Xinxiao
    Wang, Han
    Liu, Cuiwei
    Jia, Yunde
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (11) : 4096 - 4108
  • [48] Cross-view gait recognition by fusion of multiple transformation consistency measures
    Muramatsu, Daigo
    Makihara, Yasushi
    Yagi, Yasushi
    [J]. IET BIOMETRICS, 2015, 4 (02) : 62 - 73
  • [49] Cross-View Gait Recognition Based on Dual-Stream Network
    Zhao, Xiaoyan
    Zhang, Wenjing
    Zhang, Tianyao
    Zhang, Zhaohui
    [J]. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2021, 22 (05) : 671 - 678
  • [50] GaitDAN: Cross-View Gait Recognition via Adversarial Domain Adaptation
    Huang, Tianhuan
    Ben, Xianye
    Gong, Chen
    Xu, Wenzheng
    Wu, Qiang
    Zhou, Hongchao
[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (09) : 8026 - 8040