Gender Recognition Based on Gradual and Ensemble Learning from Multi-View Gait Energy Images and Poses

Citations: 0
Authors
Leung, Tak-Man [1 ]
Chan, Kwok-Leung [1 ]
Affiliations
[1] City University of Hong Kong, Department of Electrical Engineering, Hong Kong, People's Republic of China
Keywords
gender recognition; gait energy image; posture; walking cycle; cascade network; ensemble learning; classification; age
DOI: 10.3390/s23218961
Chinese Library Classification: O65 [Analytical Chemistry]
Discipline codes: 070302; 081704
Abstract
Image-based gender classification is useful in many applications, such as intelligent surveillance and micromarketing. One common approach is to adopt a machine learning algorithm that recognizes the gender class of the captured subject based on spatio-temporal gait features extracted from the image. The image input can be generated from the video of the walking cycle, e.g., the gait energy image (GEI). Recognition accuracy depends on the similarity of intra-class GEIs, as well as the dissimilarity of inter-class GEIs. However, we observe that, at some viewing angles, the GEIs of the two gender classes are very similar. Moreover, the GEI does not exhibit a clear appearance of posture. We postulate that distinctive postures of the walking cycle can provide additional and valuable information for gender classification. This paper proposes a gender classification framework that exploits multiple inputs: the GEI and the characteristic poses of the walking cycle. The proposed framework is a cascade network capable of gradually learning gait features from images acquired in multiple views. The cascade network contains a feature extractor and a gender classifier. The multi-stream feature extractor network is trained to extract features from the multiple input images. These features are then fed to the classifier network, which is trained with ensemble learning. We evaluate and compare the performance of our proposed framework with state-of-the-art gait-based gender classification methods on benchmark datasets. The proposed framework outperforms other methods that utilize only a single input of the GEI or pose.
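The GEI mentioned in the abstract is conventionally computed as the per-pixel average of size-normalized, centre-aligned binary silhouettes over one walking cycle. A minimal sketch of that computation follows; the function name and toy data are illustrative, not taken from the paper:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute a gait energy image (GEI).

    silhouettes: sequence of T aligned binary silhouette frames,
    each of shape (H, W) with values in {0, 1}.
    Returns an (H, W) float image in [0, 1]: the per-pixel mean
    over the walking cycle.
    """
    frames = np.asarray(silhouettes, dtype=np.float64)
    return frames.mean(axis=0)

# Toy example: two 2x2 "silhouettes" from one cycle.
cycle = [
    [[1, 0],
     [1, 1]],
    [[1, 1],
     [0, 1]],
]
gei = gait_energy_image(cycle)
# Pixels that are foreground in every frame have value 1.0;
# pixels that vary across the cycle take intermediate values.
```

Brighter GEI pixels correspond to body parts that stay static over the cycle (e.g., the torso), while intermediate intensities capture limb motion, which is why the GEI encodes both shape and dynamics in a single image.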
Pages: 21