Cross-modality person re-identification using hybrid mutual learning

Cited by: 4
Authors
Zhang, Zhong [1 ]
Dong, Qing [1 ]
Wang, Sen [1 ]
Liu, Shuang [1 ]
Xiao, Baihua [2 ]
Durrani, Tariq S. [3 ]
Affiliations
[1] Tianjin Normal Univ, Tianjin Key Lab Wireless Mobile Commun & Power Tr, Tianjin, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing, Peoples R China
[3] Univ Strathclyde, Elect & Elect Engn, Glasgow, Lanark, Scotland
Funding
National Natural Science Foundation of China
Keywords
RANKING;
DOI
10.1049/cvi2.12123
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Cross-modality person re-identification (Re-ID) aims to retrieve a query identity from red-green-blue (RGB) images or infrared (IR) images. Many approaches have been proposed to reduce the distribution gap between the RGB and IR modalities; however, they ignore the valuable collaborative relationship between the two modalities. Hybrid Mutual Learning (HML) for cross-modality person Re-ID is proposed, which builds this collaborative relationship through mutual learning on local features and triplet relations. Specifically, HML contains local-mean mutual learning and triplet mutual learning, which transfer local representational knowledge and structural geometry knowledge, respectively, so as to reduce the gap between the RGB and IR modalities. Furthermore, Hierarchical Attention Aggregation is proposed to fuse local feature maps and local feature vectors, enriching the information fed to the classifier. Extensive experiments on two commonly used data sets, SYSU-MM01 and RegDB, verify the effectiveness of the proposed method.
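To make the mutual-learning idea in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' released code: a symmetric KL term lets the RGB and IR branches imitate each other's identity predictions (in the spirit of local-mean mutual learning), and matching the two branches' pairwise-distance matrices transfers structural geometry knowledge (in the spirit of triplet mutual learning). The function names and the exact loss forms are illustrative assumptions of this summary.

import torch
import torch.nn.functional as F

def local_mean_mutual_loss(logits_rgb, logits_ir):
    # Symmetric KL divergence between the identity predictions of the two
    # branches, so each modality learns from the other's (part-averaged)
    # representation. Illustrative form, not the paper's implementation.
    log_p_rgb = F.log_softmax(logits_rgb, dim=1)
    log_p_ir = F.log_softmax(logits_ir, dim=1)
    kl_rgb_to_ir = F.kl_div(log_p_rgb, log_p_ir.exp(), reduction="batchmean")
    kl_ir_to_rgb = F.kl_div(log_p_ir, log_p_rgb.exp(), reduction="batchmean")
    return 0.5 * (kl_rgb_to_ir + kl_ir_to_rgb)

def triplet_mutual_loss(feat_rgb, feat_ir):
    # Align the pairwise-distance structure of the two branches so that
    # structural geometry (triplet relation) knowledge is shared across
    # modalities. Matching distance matrices with MSE is an assumption.
    dist_rgb = torch.cdist(feat_rgb, feat_rgb)
    dist_ir = torch.cdist(feat_ir, feat_ir)
    return F.mse_loss(dist_rgb, dist_ir)

# Example usage with random tensors standing in for the two branches;
# 100 is an illustrative number of training identities.
feat_rgb, feat_ir = torch.randn(8, 256), torch.randn(8, 256)
logits_rgb, logits_ir = torch.randn(8, 100), torch.randn(8, 100)
loss = local_mean_mutual_loss(logits_rgb, logits_ir) + triplet_mutual_loss(feat_rgb, feat_ir)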
Pages: 1-12
Number of pages: 12
Related Papers (50 in total)
  • [1] Dual Mutual Learning for Cross-Modality Person Re-Identification
    Zhang, Demao
    Zhang, Zhizhong
    Ju, Ying
    Wang, Cong
    Xie, Yuan
    Qu, Yanyun
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (08) : 5361 - 5373
  • [2] Fine-Grained Cross-Modality Person Re-Identification Based on Mutual Prediction Learning
    Li, Shuang
    Li, Huafeng
    Li, Fan
    [J]. LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (10)
  • [3] Modality interactive attention for cross-modality person re-identification
    Zou, Zilin
    Chen, Ying
    [J]. IMAGE AND VISION COMPUTING, 2024, 148
  • [4] Efficient Shared Feature Learning for Cross-modality Person Re-identification
    Song, Wanru
    Wang, Xinyi
    Liu, Feng
    [J]. 2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022, : 100 - 105
  • [5] HPILN: a feature learning framework for cross-modality person re-identification
    Zhao, Yun-Bo
    Lin, Jian-Wu
    Xuan, Qi
    Xi, Xugang
    [J]. IET IMAGE PROCESSING, 2019, 13 (14) : 2897 - 2904
  • [6] Deep feature learning with attributes for cross-modality person re-identification
    Zhang, Shikun
    Chen, Changhong
    Song, Wanru
    Gan, Zongliang
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2020, 29 (03)
  • [7] Cross-modality person re-identification via modality-synergy alignment learning
    Lin, Yuju
    Wang, Banghai
    [J]. MACHINE VISION AND APPLICATIONS, 2024, 35 (06)
  • [8] Cross-modality person re-identification algorithm using symmetric network
    Zhang, Yan
    Xiang, Xu
    Tang, Jun
    Wang, Nian
    Qu, Lei
    [J]. Guofang Keji Daxue Xuebao/Journal of National University of Defense Technology, 2022, 44 (01): : 122 - 128
  • [9] Cross-modality person re-identification via multi-task learning
    Huang, Nianchang
    Liu, Kunlong
    Liu, Yang
    Zhang, Qiang
    Han, Jungong
    [J]. PATTERN RECOGNITION, 2022, 128