Learning Memory-Augmented Unidirectional Metrics for Cross-modality Person Re-identification

Cited by: 101
Authors
Liu, Jialun [1 ,2 ]
Sun, Yifan [2 ]
Zhu, Feng [2 ]
Pei, Hongbin [3 ]
Yang, Yi [4 ]
Li, Wenhui [1 ]
Affiliations
[1] Jilin Univ, Changchun, Peoples R China
[2] Baidu Res, Beijing, Peoples R China
[3] Xi An Jiao Tong Univ, Xian, Peoples R China
[4] Zhejiang Univ, Hangzhou, Peoples R China
Keywords
DOI
10.1109/CVPR52688.2022.01876
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper tackles the cross-modality person re-identification (re-ID) problem by suppressing the modality discrepancy. In cross-modality re-ID, the query and gallery images are in different modalities. Given a training identity, the popular deep classification baseline shares the same proxy (i.e., a weight vector in the last classification layer) across both modalities. We find that this baseline has considerable tolerance for the modality gap, because the shared proxy acts as an intermediate relay between the two modalities. In response, we propose a Memory-Augmented Unidirectional Metric (MAUM) learning method consisting of two novel designs, i.e., unidirectional metrics and memory-based augmentation. Specifically, MAUM first learns modality-specific proxies (MS-Proxies) independently under each modality. Afterward, MAUM uses the already-learned MS-Proxies as static references for pulling close the features from the counterpart modality. These two unidirectional metrics (IR image to RGB proxy and RGB image to IR proxy) jointly alleviate the relay effect and benefit cross-modality association. The cross-modality association is further enhanced by storing the MS-Proxies in memory banks to increase the reference diversity. Importantly, we show that MAUM improves cross-modality re-ID under the modality-balanced setting and gains extra robustness against the modality-imbalance problem. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate the superiority of MAUM over the state of the art. The code will be available.
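The unidirectional metric described in the abstract can be illustrated with a minimal sketch: features from one modality (e.g., IR) are pulled toward the frozen, already-learned proxies of the counterpart modality (e.g., RGB) via a cosine-similarity cross-entropy. This is a simplified NumPy illustration under our own assumptions — the function names are hypothetical, and the temperature scaling and memory-bank augmentation of the actual method are omitted:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """L2-normalize rows so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def unidirectional_proxy_loss(features, frozen_proxies, labels):
    """Cross-entropy pulling features from one modality toward the static
    (already-learned, not updated here) proxies of the counterpart modality."""
    logits = l2_normalize(features) @ l2_normalize(frozen_proxies).T  # (N, C)
    shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = shifted / shifted.sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

# Toy example: 3 identities with orthogonal RGB proxies.
rgb_proxies = np.eye(3)
ir_feats_aligned = np.eye(3)              # IR features matching their identity's proxy
ir_feats_shuffled = np.eye(3)[[1, 2, 0]]  # IR features assigned to the wrong identities

aligned = unidirectional_proxy_loss(ir_feats_aligned, rgb_proxies, np.array([0, 1, 2]))
shuffled = unidirectional_proxy_loss(ir_feats_shuffled, rgb_proxies, np.array([0, 1, 2]))
# aligned < shuffled: the loss rewards features close to their own identity's proxy
```

Because the counterpart proxies are treated as constants, only the feature extractor receives gradient, which is what makes the metric unidirectional rather than a symmetric pull on both modalities.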
Pages: 19344 - 19353 (10 pages)