Cross-modality person re-identification using hybrid mutual learning

Cited by: 5
Authors
Zhang, Zhong [1 ]
Dong, Qing [1 ]
Wang, Sen [1 ]
Liu, Shuang [1 ]
Xiao, Baihua [2 ]
Durrani, Tariq S. [3 ]
Affiliations
[1] Tianjin Normal Univ, Tianjin Key Lab Wireless Mobile Commun & Power Tr, Tianjin, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing, Peoples R China
[3] Univ Strathclyde, Elect & Elect Engn, Glasgow, Lanark, Scotland
Funding
National Natural Science Foundation of China
Keywords
RANKING;
DOI
10.1049/cvi2.12123
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Cross-modality person re-identification (Re-ID) aims to retrieve a query identity from red, green, blue (RGB) images or infrared (IR) images. Many approaches have been proposed to reduce the distribution gap between the RGB and IR modalities; however, they ignore the valuable collaborative relationship between the two modalities. Hybrid Mutual Learning (HML) for cross-modality person Re-ID is proposed, which builds this collaborative relationship through mutual learning over local features and triplet relations. Specifically, HML contains local-mean mutual learning and triplet mutual learning, which transfer local representational knowledge and structural geometry knowledge, respectively, so as to reduce the gap between the RGB and IR modalities. Furthermore, Hierarchical Attention Aggregation is proposed to fuse local feature maps and local feature vectors, enriching the information fed to the classifier. Extensive experiments on two commonly used data sets, SYSU-MM01 and RegDB, verify the effectiveness of the proposed method.
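The abstract describes two cross-branch signals: mimicking the other modality's predictions (representational knowledge) and matching triplet/distance structure (geometry knowledge). The following is a minimal NumPy sketch of generic versions of these two objectives; the function names, the symmetric-KL formulation, and the pairwise-distance structure term are illustrative assumptions, not the paper's exact losses.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Row-wise softmax with temperature (softened targets for mutual learning)."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mutual_kl_loss(logits_rgb, logits_ir, temperature=2.0):
    """Symmetric KL divergence between the RGB and IR branches' softened
    class predictions: each branch learns to mimic the other, which is the
    generic collaborative signal in deep mutual learning (illustrative,
    not the paper's exact local-mean formulation)."""
    p = softmax(logits_rgb, temperature)
    q = softmax(logits_ir, temperature)
    kl_pq = np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1))
    kl_qp = np.mean(np.sum(q * (np.log(q) - np.log(p)), axis=1))
    return 0.5 * (kl_pq + kl_qp)

def structure_gap(feats_rgb, feats_ir):
    """Mean gap between the two branches' pairwise-distance matrices: a
    simple stand-in for transferring structural geometry (triplet
    relations) across modalities."""
    def pdist(x):
        sq = np.sum(x * x, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
        return np.sqrt(np.clip(d2, 0.0, None))
    return float(np.mean(np.abs(pdist(feats_rgb) - pdist(feats_ir))))
```

Both terms vanish when the two branches agree and grow as their predictions or embedding geometries diverge, so minimizing them pulls the RGB and IR branches toward a shared representation.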
Citation
Pages: 1-12
Page count: 12
Related papers
50 records in total
  • [31] Leaning compact and representative features for cross-modality person re-identification
    Gao, Guangwei
    Shao, Hao
    Wu, Fei
    Yang, Meng
    Yu, Yi
    World Wide Web, 2022, 25 : 1649 - 1666
  • [32] Global and Part Feature Fusion for Cross-Modality Person Re-Identification
    Wang, Xianju
    Cordova, Ronald S.
    IEEE ACCESS, 2022, 10 : 122038 - 122046
  • [33] Unbiased feature enhancement framework for cross-modality person re-identification
    Yuan, Bowen
    Chen, Bairu
    Tan, Zhiyi
    Shao, Xi
    Bao, Bing-Kun
    MULTIMEDIA SYSTEMS, 2022, 28 (03) : 749 - 759
  • [35] Cross-Modality Transformer for Visible-Infrared Person Re-Identification
    Jiang, Kongzhu
    Zhang, Tianzhu
    Liu, Xiang
    Qian, Bingqiao
    Zhang, Yongdong
    Wu, Feng
    COMPUTER VISION - ECCV 2022, PT XIV, 2022, 13674 : 480 - 496
  • [36] Co-segmentation assisted cross-modality person re-identification
    Huang, Nianchang
    Xing, Baichao
    Zhang, Qiang
    Han, Jungong
    Huang, Jin
    INFORMATION FUSION, 2024, 104
  • [37] Triplet interactive attention network for cross-modality person re-identification
    Zhang, Chenrui
    Chen, Ping
    Lei, Tao
    Meng, Hongying
    PATTERN RECOGNITION LETTERS, 2021, 152 : 202 - 209
  • [39] Cross-modality person re-identification utilizing the hybrid two-stream neural networks
    Cheng D.
    Hao Y.
    Zhou J.
    Wang N.
    Gao X.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2021, 48 (05): 190 - 200
  • [40] Cross-Modality Transformer With Modality Mining for Visible-Infrared Person Re-Identification
    Liang, Tengfei
    Jin, Yi
    Liu, Wu
    Li, Yidong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8432 - 8444