Revisiting Dropout Regularization for Cross-Modality Person Re-Identification

Cited by: 3
Authors
Rachmadi, Reza Fuad [1 ,2 ]
Nugroho, Supeno Mardi Susiki [1 ]
Purnama, I. Ketut Eddy [1 ,2 ]
Affiliations
[1] Inst Teknol Sepuluh Nopember, Dept Comp Engn, Surabaya 60111, Indonesia
[2] Univ Ctr Excellence Artificial Intelligence Healt, Surabaya 60111, Indonesia
Keywords
Feature extraction; Deep learning; Object recognition; Cameras; Identification of persons; Convolutional neural networks; Biological neural networks; Spatially targeted dropout regularization; cross-modality data; person re-identification; convolutional neural network
DOI
10.1109/ACCESS.2022.3208562
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
This paper investigates targeted dropout regularization for improving the performance of convolutional neural network classifiers on cross-modality person re-identification problems. Dropout regularization is carefully applied to specific layers (targeted dropout regularization) or to specific regions of the feature map (spatially targeted dropout regularization). The intuition behind spatially targeted dropout regularization is that feature map regions are not equally important and that the object of interest is more likely to appear near the center. We experimented extensively on the PKU-Sketch-ReID and multi-modality person re-identification datasets using the Swin Transformer deep neural network architecture. Three different spatially targeted dropout regularization schemes are used in the experiments: block-wise dropout, horizontal block-wise dropout, and vertical-horizontal block-wise dropout. Experiments on three different cross-modality re-identification datasets show that the proposed spatially targeted dropout regularization can improve the performance of deep neural network classifiers, achieving a best rank-1 accuracy of 73.20% on the PKU-Sketch-ReID dataset, 52.73% on the SYSU-MM01 dataset, and 72.74% on the RegDB dataset.
Pages: 102195-102209
Number of Pages: 15
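The abstract describes applying dropout to specific regions of the feature map rather than uniformly, with the intuition that central regions carry more of the person's appearance. Below is a minimal PyTorch-style sketch of one possible block-wise spatial dropout layer; the class name SpatialBlockDropout, the 4x4 block grid, and the per-region drop probabilities are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class SpatialBlockDropout(nn.Module):
    """Illustrative block-wise spatial dropout: the feature map is split into a
    grid of blocks and each block is zeroed with a region-dependent probability,
    so that central blocks (assumed more informative) are dropped less often."""

    def __init__(self, grid=(4, 4), p_center=0.05, p_border=0.30):
        super().__init__()
        self.grid = grid
        self.p_center = p_center
        self.p_border = p_border

    def forward(self, x):  # x: (N, C, H, W)
        if not self.training:
            return x
        n, c, h, w = x.shape
        gh, gw = self.grid
        # Per-block drop probabilities: border blocks use p_border, inner blocks p_center.
        probs = torch.full((gh, gw), self.p_center, device=x.device)
        probs[0, :] = probs[-1, :] = self.p_border
        probs[:, 0] = probs[:, -1] = self.p_border
        # Sample a keep/drop decision per sample and per block, then upsample to (H, W).
        keep = (torch.rand(n, 1, gh, gw, device=x.device) > probs).float()
        mask = nn.functional.interpolate(keep, size=(h, w), mode="nearest")
        # Rescale kept activations so the expected activation magnitude is preserved.
        keep_ratio = mask.mean(dim=(2, 3), keepdim=True).clamp(min=1e-6)
        return x * mask / keep_ratio

In a Swin Transformer pipeline the same masking idea would be applied to the spatial token grid rather than a convolutional feature map; the horizontal and vertical-horizontal variants named in the abstract would presumably drop whole rows, or row and column stripes, of blocks instead of individual blocks.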