Deep Cross-Modality Alignment for Multi-Shot Person Re-IDentification

Cited by: 6
Authors
Song, Zhichao [1 ]
Ni, Bingbing [1 ]
Yan, Yichao [1 ]
Ren, Zhe [1 ]
Xu, Yi [1 ]
Yang, Xiaokang [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Keywords
person Re-ID; cross-modality alignment network; knowledge transferring; REPRESENTATION; RECOGNITION;
DOI
10.1145/3123266.3123324
CLC number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Multi-shot person Re-IDentification (Re-ID) has recently received more research attention, as its problem setting is more realistic than single-shot Re-ID in terms of application. While many large-scale single-shot Re-ID human image datasets have been released, most existing multi-shot Re-ID video sequence datasets contain only a few hundred human instances, which hinders further improvement of multi-shot Re-ID performance. To this end, we propose a deep cross-modality alignment network that jointly explores both human sequence pairs and image pairs to facilitate training better multi-shot human Re-ID models, i.e., by transferring knowledge from image data to sequence data. To mitigate the modality-to-modality mismatch issue, the proposed network is equipped with an image-to-sequence adaptation module, called the cross-modality alignment sub-network, which maps each human image into a pseudo human sequence to facilitate knowledge transfer and joint training. Extensive experimental results on several multi-shot person Re-ID benchmarks demonstrate the large performance gain brought by the proposed network.
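The abstract does not specify the sub-network's architecture, but the core idea of mapping a single image into a pseudo sequence can be sketched as follows. This is a minimal illustration under assumed details: the function name `image_to_pseudo_sequence`, the tiling step, and the linear adaptation matrix `adapt_W` are all illustrative placeholders, not the paper's actual design.

```python
import numpy as np

def image_to_pseudo_sequence(img_feat, seq_len, adapt_W):
    """Tile one image feature vector across `seq_len` time steps and
    apply a (stand-in) learned linear adaptation, yielding a pseudo
    sequence shaped like a real video-sequence feature."""
    tiled = np.tile(img_feat, (seq_len, 1))  # shape: (seq_len, feat_dim)
    return tiled @ adapt_W                   # adapted pseudo sequence

rng = np.random.default_rng(0)
feat_dim, seq_len = 128, 8
img_feat = rng.standard_normal(feat_dim)
# Near-identity matrix as a placeholder for learned adaptation weights.
adapt_W = np.eye(feat_dim) + 0.01 * rng.standard_normal((feat_dim, feat_dim))

pseudo_seq = image_to_pseudo_sequence(img_feat, seq_len, adapt_W)
print(pseudo_seq.shape)  # (8, 128)
```

Once image data is lifted into sequence shape like this, image pairs and real sequence pairs can share one training pipeline, which is the joint-training setup the abstract describes.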
Pages: 645-653
Page count: 9