RGB-IR cross-modality person ReID based on teacher-student GAN model

Cited by: 40
Authors:
Zhang, Ziyue [1 ]
Jiang, Shuai [1 ]
Huang, Congzhentao [1 ]
Li, Yang [1 ]
Da Xu, Richard Yi [1 ]
Affiliations:
[1] Univ Technol Sydney, 15 Broadway, Ultimo, NSW 2007, Australia
Keywords:
Person ReID; Cross-modality; Teacher-student model; Re-identification
DOI:
10.1016/j.patrec.2021.07.006
CLC Number:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
RGB-Infrared (RGB-IR) person re-identification (ReID) is the task of automatically matching the same person across different parts of a video when visible light is unavailable. The critical challenge is the cross-modality gap between features extracted from the two modalities. To address it, we propose a Teacher-Student GAN model (TS-GAN) that adapts the two domains and guides the ReID backbone. (1) To obtain corresponding RGB-IR image pairs, an RGB-IR Generative Adversarial Network (GAN) is used to generate IR images. (2) To kick-start identity training, a ReID Teacher module is trained on IR person images and then used to guide its Student counterpart during training. (3) Likewise, to better adapt the domain features and enhance ReID performance, three Teacher-Student loss functions are used. Unlike other GAN-based models, the proposed model requires only the backbone module at the test stage, making it more efficient and resource-saving. To showcase the model's capability, we conducted extensive experiments on the newly released SYSU-MM01 and RegDB RGB-IR ReID benchmarks and achieved performance superior to the state of the art, with 47.4% mAP and 69.4% mAP respectively. (C) 2021 Elsevier B.V. All rights reserved.
Pages: 155-161
Number of pages: 7
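
The abstract above describes a Teacher trained on IR images that guides an RGB Student through three Teacher-Student loss functions. The exact loss definitions are not given in this record, so the following PyTorch-style sketch is illustrative only: it assumes a frozen Teacher and combines an identity cross-entropy with two generic distillation terms (feature-level MSE and temperature-softened logit KL) as stand-ins for the paper's actual losses.

import torch
import torch.nn.functional as F

def teacher_student_losses(student_feat, student_logits,
                           teacher_feat, teacher_logits,
                           labels, temperature=4.0):
    """Illustrative teacher-student objectives (generic stand-ins,
    not the paper's exact three losses)."""
    # Identity loss: standard cross-entropy on the student's predictions.
    id_loss = F.cross_entropy(student_logits, labels)

    # Feature-level guidance: pull the student's (RGB) features toward
    # the frozen teacher's (IR) features for the paired images.
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())

    # Logit-level guidance: KL divergence between temperature-softened
    # distributions (classic knowledge distillation).
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    return id_loss + feat_loss + kd_loss

# Example usage with placeholder shapes (batch of 8, 512-d features,
# 100 identities; all values here are arbitrary for illustration).
s_feat, s_logits = torch.randn(8, 512), torch.randn(8, 100)
t_feat, t_logits = torch.randn(8, 512), torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = teacher_student_losses(s_feat, s_logits, t_feat, t_logits, labels)

Note that only the student backbone would be kept for inference, which matches the abstract's claim that the GAN and Teacher modules are discarded at the test stage.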