A Base-Derivative Framework for Cross-Modality RGB-Infrared Person Re-Identification

Cited by: 4
Authors
Liu, Hong [1]
Miao, Ziling [1]
Yang, Bing [1]
Ding, Runwei [1]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
DOI
10.1109/ICPR48806.2021.9413029
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-modality RGB-infrared (RGB-IR) person re-identification (Re-ID) is a challenging research topic due to the heterogeneity of RGB and infrared images. In this paper, we aim to find auxiliary modalities, homologous with the visible or infrared modalities, that help reduce the modality discrepancy caused by heterogeneous images. Accordingly, a new base-derivative framework is proposed, where base refers to the original visible and infrared modalities, and derivative refers to the two auxiliary modalities derived from the base ones. In the proposed framework, the two-modality cross-modal learning problem is reformulated as a four-modality one. The images of all the base and derivative modalities are then fed into the feature learning network. With the doubled input images, the learned person features become more discriminative. Furthermore, the proposed framework is optimized by enhanced intra- and cross-modality constraints with the assistance of the two derivative modalities. Experimental results on two publicly available datasets, SYSU-MM01 and RegDB, show that the proposed method outperforms other state-of-the-art methods. For instance, we achieve a gain of over 13% in terms of both Rank-1 and mAP on the RegDB dataset.
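The abstract does not specify how the two derivative modalities are produced; the sketch below is only an illustration of the batch-doubling idea, using two stand-in transforms (a luminance projection of RGB and a min-max normalized IR image) that are assumptions, not the paper's actual derivation.

```python
import numpy as np

def rgb_derivative(rgb):
    """Illustrative derivative of the RGB base modality: project an
    (H, W, 3) image to a single luminance channel (assumed transform)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def ir_derivative(ir):
    """Illustrative derivative of the IR base modality: min-max
    normalize an (H, W) image (assumed transform)."""
    lo, hi = ir.min(), ir.max()
    return (ir - lo) / (hi - lo + 1e-8)

def four_modality_batch(rgb_images, ir_images):
    """Reformulate the two-modality input as four modalities: each base
    image is paired with one derivative, doubling the images fed to the
    feature learning network."""
    batch = []
    for rgb in rgb_images:
        batch.append(("rgb", rgb))
        batch.append(("rgb_derivative", rgb_derivative(rgb)))
    for ir in ir_images:
        batch.append(("ir", ir))
        batch.append(("ir_derivative", ir_derivative(ir)))
    return batch
```

With this construction, a batch of N base images yields 2N training images across four modality labels, which is the "doubled input" the abstract refers to.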
Pages: 7640-7646
Page count: 7
Related Papers
(50 in total)
  • [41] Counterfactual attention alignment for visible-infrared cross-modality person re-identification
    Sun, Zongzhe
    Zhao, Feng
    PATTERN RECOGNITION LETTERS, 2023, 168 : 79 - 85
  • [42] Dual Mutual Learning for Cross-Modality Person Re-Identification
    Zhang, Demao
    Zhang, Zhizhong
    Ju, Ying
    Wang, Cong
    Xie, Yuan
    Qu, Yanyun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (08) : 5361 - 5373
  • [43] Cross-Modality Semantic Consistency Learning for Visible-Infrared Person Re-Identification
    Liu, Min
    Zhang, Zhu
    Bian, Yuan
    Wang, Xueping
    Sun, Yeqing
    Zhang, Baida
    Wang, Yaonan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 568 - 580
  • [44] Cross-modality nearest neighbor loss for visible-infrared person re-identification
    Zhao S.
    Qi A.
    Gao Y.
Journal of Beijing University of Aeronautics and Astronautics, 2024, 50 (02): 433 - 441
  • [45] Cross-Modal Cross-Domain Dual Alignment Network for RGB-Infrared Person Re-Identification
    Fu, Xiaowei
    Huang, Fuxiang
    Zhou, Yuhang
    Ma, Huimin
    Xu, Xin
    Zhang, Lei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (10) : 6874 - 6887
  • [46] Homogeneous-to-Heterogeneous: Unsupervised Learning for RGB-Infrared Person Re-Identification
    Liang, Wenqi
    Wang, Guangcong
    Lai, Jianhuang
    Xie, Xiaohua
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 6392 - 6407
  • [47] Cross-Modality Person Re-Identification via Modality Confusion and Center Aggregation
    Hao, Xin
    Zhao, Sanyuan
    Ye, Mang
    Shen, Jianbing
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16383 - 16392
  • [48] An efficient framework for visible-infrared cross modality person re-identification
    Basaran, Emrah
    Gokmen, Muhittin
    Kamasak, Mustafa E.
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2020, 87
  • [49] Cross-Modality Hierarchical Clustering and Refinement for Unsupervised Visible-Infrared Person Re-Identification
    Pang, Zhiqi
    Wang, Chunyu
    Zhao, Lingling
    Liu, Yang
    Sharma, Gaurav
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (04) : 2706 - 2718
  • [50] Knowledge self-distillation for visible-infrared cross-modality person re-identification
    Yu Zhou
    Rui Li
    Yanjing Sun
    Kaiwen Dong
    Song Li
    Applied Intelligence, 2022, 52 : 10617 - 10631