An Ensemble of Invariant Features for Person Re-identification

Cited: 0
Authors
Chen, Shen-Chi [1 ]
Lee, Young-Gun [2 ]
Hwang, Jenq-Neng [2 ]
Hung, Yi-Ping [1 ]
Yoo, Jang-Hee [3 ]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, 1 Sec 4,Roosevelt Rd, Taipei 10617, Taiwan
[2] Univ Washington, Dept Elect Engn, Seattle, WA 98195 USA
[3] ETRI, SW Content Res Lab, Daejeon 305700, South Korea
Keywords
DOI: none available
Chinese Library Classification: TM (Electrical Engineering); TN (Electronics and Communication Technology)
Subject Classification Codes: 0808; 0809
Abstract
We propose an ensemble of invariant features for person re-identification. The proposed method requires no domain learning and effectively overcomes the variations in human pose and viewpoint between a pair of different cameras. Our ensemble model utilizes both holistic and region-based features. To avoid the misalignment problem, multiple virtual samples are generated from the test human object sample by applying slight geometric distortions. The holistic features are extracted from a publicly available pre-trained deep convolutional neural network, while the region-based features are based on our proposed Two-Way Gaussian Mixture Model Fitting and the Completed Local Binary Pattern texture representations. To generalize better during matching without an additional learning process for feature aggregation, the ensemble scheme combines the three feature distances using distance normalization. The proposed framework is robust against partial occlusion as well as pose and viewpoint changes. In addition, experimental results show that our method exceeds state-of-the-art person re-identification performance on the challenging 3DPeS benchmark.
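The abstract states that the three feature distances are combined by distance normalization, with no learned aggregation weights. A minimal sketch of that idea, assuming min-max normalization of each per-feature distance vector followed by a plain sum (the record does not specify the normalization method, and all names below are hypothetical):

```python
import numpy as np

def minmax_normalize(d):
    # Scale one probe-to-gallery distance vector to [0, 1].
    # Hypothetical choice: the abstract only says "distance normalization".
    d = np.asarray(d, dtype=float)
    rng = d.max() - d.min()
    return (d - d.min()) / rng if rng > 0 else np.zeros_like(d)

def ensemble_distance(d_cnn, d_gmm, d_clbp):
    # Sum the normalized distances from the three feature channels
    # (holistic CNN, Two-Way GMM Fitting, CLBP); no extra learning step.
    return (minmax_normalize(d_cnn)
            + minmax_normalize(d_gmm)
            + minmax_normalize(d_clbp))

# Toy example: distances from one probe to four gallery identities.
d_cnn  = [0.9, 0.2, 0.5, 0.7]
d_gmm  = [1.4, 0.3, 1.1, 0.8]
d_clbp = [0.6, 0.1, 0.4, 0.9]
scores = ensemble_distance(d_cnn, d_gmm, d_clbp)
best_match = int(np.argmin(scores))  # smallest combined distance wins
```

Because each channel is rescaled to the same [0, 1] range before summing, no single feature's raw distance scale can dominate the match, which is what makes a weight-free combination workable.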
Pages: 6
Related Papers
50 items in total
  • [21] Learning camera invariant deep features for semi-supervised person re-identification
    Zhu, Hui
    Huang, Lei
    Wei, Zhiqiang
    Zhang, Wenfeng
    Cai, Huanhuan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (13) : 18671 - 18692
  • [22] Learning enhancing modality-invariant features for visible-infrared person re-identification
    Zhang, La
    Zhao, Xu
    Du, Haohua
    Sun, Jian
    Wang, Jinqiao
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025, 16 (01) : 55 - 73
  • [23] Camera Invariant Feature Learning for Unsupervised Person Re-Identification
    Pang, Zhiqi
    Zhao, Lingling
    Liu, Qiuyang
    Wang, Chunyu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 6171 - 6182
  • [24] Person Re-Identification with Discriminatively Trained Viewpoint Invariant Dictionaries
    Karanam, Srikrishna
    Li, Yang
    Radke, Richard J.
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 4516 - 4524
  • [25] Scale-Invariant Siamese Network for Person Re-identification
    Zhang, Yunzhou
    Shi, Weidong
    Liu, Shuangwei
    Bao, Jining
    Wei, Ying
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 2436 - 2440
  • [26] Apparel-Invariant Feature Learning for Person Re-Identification
    Yu, Zhengxu
    Zhao, Yilun
    Hong, Bin
    Jin, Zhongming
    Huang, Jianqiang
    Cai, Deng
    Hua, Xian-Sheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 4482 - 4492
  • [27] DeNet: An Explicit Distance Ensemble Model for Person Re-identification
    Wang, Jin
    Gao, Changxin
    Hu, Jing
    Sang, Nong
    PROCEEDINGS 3RD IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION ACPR 2015, 2015, : 21 - 25
  • [28] Learning Camera-Invariant Representation for Person Re-identification
    Qin, Shizheng
    Gu, Kangzheng
    Wang, Lecheng
    Qi, Lizhe
    Zhang, Wenqiang
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 125 - 137
  • [29] Pose-Invariant Embedding for Deep Person Re-Identification
    Zheng, Liang
    Huang, Yujia
    Lu, Huchuan
    Yang, Yi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (09) : 4500 - 4509
  • [30] Learning Domain Invariant Representations for Generalizable Person Re-Identification
    Zhang, Yi-Fan
    Zhang, Zhang
    Li, Da
    Jia, Zhen
    Wang, Liang
    Tan, Tieniu
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 509 - 523