Cross-view gait recognition based on residual long short-term memory

Cited by: 0
Authors
Junqin Wen
Xiuhui Wang
Institutions
[1] Zhejiang Technical Institute of Economics, Key Laboratory of Electromagnetic Wave Information Technology and Metrology of Zhejiang Province, College of Information Engineering
[2] China Jiliang University
Source
Multimedia Tools and Applications, 2021, 80(19): 28777-28788
Keywords
Gait classification; Deep learning; Long short-term memory; Residual network
Abstract
As a promising biometric technology, gait recognition offers several advantages, such as being non-invasive and applicable at long distances, but it is highly sensitive to changes in the video acquisition angle. In this paper, we propose a novel cross-view gait recognition framework based on residual long short-term memory, namely CVGR-RLSTM, to extract intrinsic gait features and perform gait recognition. The proposed framework captures dependencies among human postures in the time dimension during walking by taking randomly sampled frame-by-frame gait energy images as input. These frame-by-frame gait energy images are generated by sequentially merging adjacent gait silhouette images, which integrates temporal and spatial gait features to a certain extent. Within the CVGR-RLSTM framework, an embedded residual module further refines the spatial gait features, while an LSTM module optimizes the temporal gait features. To evaluate the proposed framework, we carried out a series of comparative experiments on the CASIA Dataset B and the OU-ISIR LP Dataset. Experimental results show that the proposed method reaches the state-of-the-art level.
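
A minimal sketch of the frame-by-frame gait energy image (ff-GEI) generation described in the abstract, assuming that "sequentially merging adjacent gait silhouette images" means averaging a sliding window of binary silhouette frames; the window size k and the 64x44 crop are illustrative assumptions, not values taken from the paper.

import numpy as np

def frame_by_frame_gei(silhouettes, k=5):
    """Average each run of k adjacent binary silhouettes into one ff-GEI.

    silhouettes: array of shape (T, H, W) with values in {0, 1}.
    Returns an array of shape (T - k + 1, H, W) with values in [0, 1].
    """
    T = silhouettes.shape[0]
    if T < k:
        raise ValueError(f"need at least {k} frames, got {T}")
    # Each ff-GEI is the mean of frames [t, t + k); consecutive ff-GEIs
    # overlap by k - 1 frames, so temporal ordering is preserved.
    return np.stack([silhouettes[t:t + k].mean(axis=0)
                     for t in range(T - k + 1)])

# Example: 40 synthetic silhouettes of size 64x44 (a common CASIA-B crop).
seq = (np.random.rand(40, 64, 44) > 0.5).astype(np.float32)
print(frame_by_frame_gei(seq, k=5).shape)  # (36, 64, 44)

Because consecutive windows overlap, even a random sample of ff-GEIs still carries local motion information, which is what the temporal module exploits downstream.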
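
The residual and LSTM modules can be sketched in the same spirit. A minimal PyTorch version, assuming one residual block for spatial refinement and a single-layer LSTM whose final hidden state feeds a linear classifier; channel counts, pooling, and the classification head are illustrative choices, not the paper's exact CVGR-RLSTM architecture.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut: relu(F(x) + x) refines spatial features.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

class CVGR_RLSTM(nn.Module):
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.res = ResidualBlock(32)
        self.pool = nn.AdaptiveAvgPool2d(4)        # (32, 4, 4) per frame
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):                          # x: (B, T, 1, H, W)
        B, T = x.shape[:2]
        f = self.stem(x.flatten(0, 1))             # fold time into batch
        f = self.pool(self.res(f))                 # spatial refinement
        f = f.flatten(1).view(B, T, -1)            # back to a sequence
        out, _ = self.lstm(f)                      # temporal modelling
        return self.fc(out[:, -1])                 # last-step logits

model = CVGR_RLSTM(num_classes=124)                # CASIA-B has 124 subjects
logits = model(torch.randn(2, 36, 1, 64, 44))      # batch of 2 ff-GEI clips
print(logits.shape)                                # torch.Size([2, 124])

Folding the time axis into the batch lets a single residual stack process every ff-GEI identically before the LSTM consumes the resulting feature sequence.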