A Base-Derivative Framework for Cross-Modality RGB-Infrared Person Re-Identification

Cited by: 4
Authors
Liu, Hong [1 ]
Miao, Ziling [1 ]
Yang, Bing [1 ]
Ding, Runwei [1 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICPR48806.2021.9413029
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-modality RGB-infrared (RGB-IR) person re-identification (Re-ID) is a challenging research topic due to the heterogeneity of RGB and infrared images. In this paper, we aim to find auxiliary modalities, homologous with the visible or infrared modalities, to help reduce the modality discrepancy caused by heterogeneous images. Accordingly, a new base-derivative framework is proposed, where base refers to the original visible and infrared modalities, and derivative refers to the two auxiliary modalities derived from them. In the proposed framework, the two-modality cross-modal learning problem is reformulated as a four-modality one. The images of all the base and derivative modalities are then fed into the feature learning network. With the doubled input images, the learned person features become more discriminative. Furthermore, the proposed framework is optimized by enhanced intra- and cross-modality constraints with the assistance of the two derivative modalities. Experimental results on two publicly available datasets, SYSU-MM01 and RegDB, show that the proposed method outperforms other state-of-the-art methods. For instance, we achieve a gain of over 13% in terms of both Rank-1 and mAP on the RegDB dataset.
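The four-modality reformulation described in the abstract can be sketched as a preprocessing step. The sketch below is purely illustrative and not the paper's actual derivation: it assumes each derivative modality is a simple channel transform of its base (a luminance grayscale for RGB, a contrast-normalized copy for IR), so a two-modality batch becomes a four-modality input set.

```python
import numpy as np

def derive_modalities(rgb_batch, ir_batch):
    """Illustrative sketch of the base-derivative idea (assumed transforms).

    rgb_batch: (N, H, W, 3) float images in [0, 1]
    ir_batch:  (N, H, W) single-channel infrared images
    """
    # Hypothetical derivative of RGB: luminance-weighted grayscale
    rgb_deriv = rgb_batch @ np.array([0.299, 0.587, 0.114])      # (N, H, W)
    # Hypothetical derivative of IR: contrast-normalized copy
    ir_deriv = (ir_batch - ir_batch.mean()) / (ir_batch.std() + 1e-6)
    # Four-modality input set: two base modalities plus two derivatives,
    # doubling the images fed to the feature learning network
    return {"rgb": rgb_batch, "ir": ir_batch,
            "rgb_deriv": rgb_deriv, "ir_deriv": ir_deriv}
```

With N images per base modality, the network now sees 2N images per spectrum, which is the "doubled input" the abstract refers to; the enhanced intra- and cross-modality constraints would then be computed across all four modality sets.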
Pages: 7640 - 7646
Page count: 7
Related papers
50 in total
  • [21] HPILN: a feature learning framework for cross-modality person re-identification
    Zhao, Yun-Bo
    Lin, Jian-Wu
    Xuan, Qi
    Xi, Xugang
    IET IMAGE PROCESSING, 2019, 13 (14) : 2897 - 2904
  • [22] Modality interactive attention for cross-modality person re-identification
    Zou, Zilin
    Chen, Ying
    IMAGE AND VISION COMPUTING, 2024, 148
  • [23] Cross-Modality Transformer With Modality Mining for Visible-Infrared Person Re-Identification
    Liang, Tengfei
    Jin, Yi
    Liu, Wu
    Li, Yidong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8432 - 8444
  • [24] A Survey on Cross-Modality Heterogeneous Person Re-identification
    Sun R.
    Zhao Z.
    Yang Z.
    Gao J.
    Sun, Rui (sunrui@hfut.edu.cn), 1600, Science Press (33): 1066 - 1082
  • [25] Discover Cross-Modality Nuances for Visible-Infrared Person Re-Identification
    Wu, Qiong
    Dai, Pingyang
    Chen, Jie
    Lin, Chia-Wen
    Wu, Yongjian
    Huang, Feiyue
    Zhong, Bineng
    Ji, Rongrong
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 4328 - 4337
  • [26] A cross-modality person re-identification method for visible-infrared images
    Sun Y.
    Wang R.
    Zhang Q.
    Lin R.
    Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2024, 50 (06): 2018 - 2025
  • [27] Cross-modality consistency learning for visible-infrared person re-identification
    Shao, Jie
    Tang, Lei
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)
  • [28] Self-attention Cross-modality Fusion Network for Cross-modality Person Re-identification
    Du P.
    Song Y.-H.
    Zhang X.-Y.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (06): 1457 - 1468
  • [29] RGB-INFRARED PERSON RE-IDENTIFICATION VIA MULTI-MODALITY RELATION AGGREGATION AND GRAPH CONVOLUTION NETWORK
    Sun, Jiangshan
    Zhang, Taiping
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1174 - 1178
  • [30] Homogeneous-to-Heterogeneous: Unsupervised Learning for RGB-Infrared Person Re-Identification
    Liang, Wenqi
    Wang, Guangcong
    Lai, Jianhuang
    Xie, Xiaohua
    IEEE Transactions on Image Processing, 2021, 30 : 6392 - 6407