A Base-Derivative Framework for Cross-Modality RGB-Infrared Person Re-Identification

Cited by: 4
Authors
Liu, Hong [1 ]
Miao, Ziling [1 ]
Yang, Bing [1 ]
Ding, Runwei [1 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICPR48806.2021.9413029
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-modality RGB-infrared (RGB-IR) person re-identification (Re-ID) is a challenging research topic due to the heterogeneity of RGB and infrared images. In this paper, we aim to find auxiliary modalities, homologous with the visible or infrared modality, that help reduce the modality discrepancy caused by heterogeneous images. Accordingly, a new base-derivative framework is proposed, where base refers to the original visible and infrared modalities, and derivative refers to the two auxiliary modalities derived from the base. In the proposed framework, the two-modality cross-modal learning problem is reformulated as a four-modality one. The images of all base and derivative modalities are then fed into the feature learning network. With the doubled input images, the learned person features become more discriminative. Furthermore, the proposed framework is optimized by enhanced intra- and cross-modality constraints with the assistance of the two derivative modalities. Experimental results on two publicly available datasets, SYSU-MM01 and RegDB, show that the proposed method outperforms other state-of-the-art methods. For instance, we achieve a gain of over 13% in both Rank-1 and mAP on the RegDB dataset.
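The sketch below is a minimal illustration, in PyTorch, of how a training step over such a four-modality reformulation could be organized; it is not the authors' implementation. The SharedFeatureNet backbone, the grayscale-style derive() construction of the two derivative modalities, and the specific intra-/cross-modality loss terms are all placeholders introduced purely for illustration, since the abstract does not specify how the derivatives or constraints are built.

```python
# Minimal sketch (NOT the authors' code) of a four-modality training step:
# two derivative batches are produced from the RGB and infrared base batches,
# all four pass through a shared feature network, and an identity loss is
# combined with illustrative intra-/cross-modality constraints.
# Both inputs are assumed to be (B, 3, H, W) tensors.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedFeatureNet(nn.Module):
    """Toy shared backbone standing in for the feature learning network."""

    def __init__(self, feat_dim=128, num_ids=100):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        feat = F.normalize(self.backbone(x), dim=1)
        return feat, self.classifier(feat)


def derive(rgb, ir):
    """Placeholder derivative modalities: channel-averaged copies of each base."""
    d_rgb = rgb.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1)
    d_ir = ir.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1)
    return d_rgb, d_ir


def train_step(model, rgb, ir, labels, optimizer, margin=0.3):
    """One optimization step over the four modalities (two base + two derivative)."""
    d_rgb, d_ir = derive(rgb, ir)
    feats, logits = zip(*(model(x) for x in (rgb, d_rgb, d_ir, ir)))

    # Identity loss over all four modalities (the "doubled" input images).
    id_loss = sum(F.cross_entropy(lg, labels) for lg in logits)

    # Illustrative intra-modality constraint: pull each base modality toward
    # its homologous derivative.
    intra = F.mse_loss(feats[0], feats[1]) + F.mse_loss(feats[2], feats[3])

    # Illustrative cross-modality constraint: penalize RGB-IR feature distances
    # that exceed a margin.
    cross = F.relu((feats[0] - feats[3]).pow(2).sum(1).sqrt() - margin).mean()

    loss = id_loss + intra + cross
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with random tensors (shapes illustrative):
# model = SharedFeatureNet()
# opt = torch.optim.Adam(model.parameters(), lr=3e-4)
# rgb = torch.rand(8, 3, 128, 64); ir = torch.rand(8, 3, 128, 64)
# labels = torch.randint(0, 100, (8,))
# train_step(model, rgb, ir, labels, opt)
```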
Pages: 7640-7646
Page count: 7
Related Papers
(50 in total)
  • [31] Hierarchical Feature Fusion for Cross-Modality Person Re-identification
    Fu, Wen
    Lim, Monghao
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (16)
  • [32] Proxy-Based Embedding Alignment for RGB-Infrared Person Re-Identification
    Dou, Zhaopeng
    Sun, Yifan
    Li, Yali
    Wang, Shengjin
    TSINGHUA SCIENCE AND TECHNOLOGY, 2025, 30 (03): 1112 - 1124
  • [33] Channel decoupling network for cross-modality person re-identification
    Chen, Jingying
    Chen, Chang
    Tan, Lei
    Peng, Shixin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (09) : 14091 - 14105
  • [34] Dynamic feature weakening for cross-modality person re-identification
    Lu, Jian
    Chen, Mengdie
    Wang, Hangying
    Pang, Feifei
    COMPUTERS & ELECTRICAL ENGINEERING, 2023, 109
  • [35] Two-way constraint network for RGB-Infrared person re-identification
    Zeng, Haitang
    Hu, Weipeng
    Chen, Dihu
    Hu, Haifeng
    ELECTRONICS LETTERS, 2021, 57 (17) : 653 - 655
  • [36] Distance based Training for Cross-Modality Person Re-Identification
    Tekeli, Nihat
    Can, Ahmet Burak
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 4540 - 4549
  • [37] Revisiting Dropout Regularization for Cross-Modality Person Re-Identification
    Rachmadi, Reza Fuad
    Nugroho, Supeno Mardi Susiki
    Purnama, I. Ketut Eddy
    IEEE ACCESS, 2022, 10 : 102195 - 102209
  • [39] Cross-Modality Person Re-Identification with Generative Adversarial Training
    Dai, Pingyang
    Ji, Rongrong
    Wang, Haibin
    Wu, Qiong
    Huang, Yuyu
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 677 - 683
  • [40] Visible-Infrared Person Re-Identification via Cross-Modality Interaction Transformer
    Feng, Yujian
    Yu, Jian
    Chen, Feng
    Ji, Yimu
    Wu, Fei
    Liu, Shangdon
    Jing, Xiao-Yuan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 7647 - 7659