Cross-modality person re-identification via multi-task learning

Cited by: 0
Authors
Huang, Nianchang [1 ]
Liu, Kunlong [1 ]
Liu, Yang [2 ]
Zhang, Qiang [1 ]
Han, Jungong [3 ]
Affiliations
[1] Center for Complex Systems, School of Mechano-Electronic Engineering, Xidian University, Xi'an 710071, Shaanxi, China
[2] State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China
[3] Computer Science Department, Aberystwyth University, Aberystwyth SY23 3FL, United Kingdom
Funding
National Natural Science Foundation of China;
Keywords
Arts computing - Learning systems - Semantics;
DOI
Not available
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Despite promising preliminary results, existing cross-modality Visible-Infrared Person Re-IDentification (VI-PReID) models that incorporate semantic (person) masks simply use these masks as selection maps to separate person features from background regions. Such models make no dedicated effort to extract more modality-invariant person body features within the VI-PReID network itself, leading to suboptimal results. In contrast, we aim to better capture person body information within the VI-PReID network itself by exploiting the inner relations between person mask prediction and VI-PReID. To this end, a novel multi-task learning model is presented in this paper, where person body features obtained by person mask prediction potentially facilitate the extraction of discriminative modality-shared person body information for VI-PReID. On top of that, considering the task difference between person mask prediction and VI-PReID, we propose a novel task translation sub-network to transfer discriminative person body information, extracted by person mask prediction, into VI-PReID. Doing so enables our model to better exploit discriminative and modality-invariant person body information. Thanks to more discriminative modality-shared features, our method outperforms previous state-of-the-art methods by a significant margin on several benchmark datasets. Our findings validate the effectiveness of extracting discriminative person body features for the VI-PReID task. © 2022 Elsevier Ltd
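To make the described architecture more concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the multi-task idea in the abstract: a shared backbone feeds both a person-mask prediction head and a VI-PReID head, while an assumed task-translation module injects mask-derived body cues into the ReID features. All module names, layer choices, and shapes are illustrative assumptions.

```python
# Minimal sketch of a multi-task VI-PReID model, assuming a simple conv
# backbone and a hypothetical "translate" module; not the paper's actual code.
import torch
import torch.nn as nn

class MultiTaskVIPReID(nn.Module):
    def __init__(self, num_ids: int, feat_dim: int = 256):
        super().__init__()
        # Shared backbone over visible/infrared inputs (placeholder conv stack).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Auxiliary task head: per-pixel person-mask prediction.
        self.mask_head = nn.Conv2d(feat_dim, 1, 1)
        # Hypothetical task-translation module: adapts mask-branch features
        # before they are fused into the ReID branch.
        self.translate = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 1), nn.ReLU(inplace=True),
        )
        # ReID head: global pooling plus an identity classifier.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        shared = self.backbone(x)              # modality-shared features
        mask_logits = self.mask_head(shared)   # person-mask prediction task
        # Gate translated features by the predicted mask to emphasize body regions.
        body = self.translate(shared) * torch.sigmoid(mask_logits)
        feat = self.pool(shared + body).flatten(1)
        return mask_logits, feat, self.classifier(feat)
```

Joint training under this sketch would combine a segmentation loss on mask_logits with identity/triplet losses on the ReID outputs, mirroring the multi-task formulation described above.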
Related Papers
50 items in total
  • [41] Visible-Infrared Person Re-Identification via Cross-Modality Interaction Transformer
    Feng, Yujian
    Yu, Jian
    Chen, Feng
    Ji, Yimu
    Wu, Fei
    Liu, Shangdong
    Jing, Xiao-Yuan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 7647 - 7659
  • [42] Cross-modality person re-identification via channel-based partition network
    Liu, Jiachang
    Song, Wanru
    Chen, Changhong
    Liu, Feng
    APPLIED INTELLIGENCE, 2022, 52 (03) : 2423 - 2435
  • [43] Cross-Modality Person Re-Identification via Local Paired Graph Attention Network
    Zhou, Jianglin
    Dong, Qing
    Zhang, Zhong
    Liu, Shuang
    Durrani, Tariq S.
    SENSORS, 2023, 23 (08)
  • [44] Cross-Modality Semantic Consistency Learning for Visible-Infrared Person Re-Identification
    Liu, Min
    Zhang, Zhu
    Bian, Yuan
    Wang, Xueping
    Sun, Yeqing
    Zhang, Baida
    Wang, Yaonan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 568 - 580
  • [45] Learning Memory-Augmented Unidirectional Metrics for Cross-modality Person Re-identification
    Liu, Jialun
    Sun, Yifan
    Zhu, Feng
    Pei, Hongbin
    Yang, Yi
    Li, Wenhui
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 19344 - 19353
  • [46] Not All Pixels Are Matched: Dense Contrastive Learning for Cross-Modality Person Re-Identification
    Sun, Hanzhe
    Liu, Jun
    Zhang, Zhizhong
    Wang, Chengjie
    Qu, Yanyun
    Xie, Yuan
    Ma, Lizhuang
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5333 - 5341
  • [47] MRLReID: Unconstrained Cross-Resolution Person Re-Identification With Multi-Task Resolution Learning
    Peng, Chunlei
    Wang, Bo
    Liu, Decheng
    Wang, Nannan
    Hu, Ruimin
    Gao, Xinbo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10) : 10050 - 10062
  • [48] A Multi-Task Deep Network for Person Re-Identification
    Chen, Weihua
    Chen, Xiaotang
    Zhang, Jianguo
    Huang, Kaiqi
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3988 - 3994
  • [49] Cross-modality person re-identification based on intermediate modal generation
    Lu, Jian
    Zhang, Shasha
    Chen, Mengdie
    Chen, Xiaogai
    Zhang, Kaibing
    OPTICS AND LASERS IN ENGINEERING, 2024, 177
  • [50] Learning compact and representative features for cross-modality person re-identification
    Gao, Guangwei
    Shao, Hao
    Wu, Fei
    Yang, Meng
    Yu, Yi
    World Wide Web, 2022, 25 : 1649 - 1666