Calibrated Domain-Invariant Learning for Highly Generalizable Large Scale Re-Identification

Cited: 0
Authors
Yuan, Ye [1 ]
Chen, Wuyang [1 ]
Chen, Tianlong [1 ]
Yang, Yang [2 ]
Ren, Zhou [3 ]
Wang, Zhangyang [1 ]
Hua, Gang [3 ]
Affiliations
[1] Texas A&M Univ, Dept Comp Sci & Engn, College Stn, TX 77843 USA
[2] Walmart Technol, Sunnyvale, CA USA
[3] Wormpex AI Res, Bellevue, WA USA
Source
2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2020
Keywords
DOI
10.1109/wacv45572.2020.9093521
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many real-world applications, such as city-scale traffic monitoring and control, require large-scale re-identification (ReID). However, previous ReID methods often fail to address two limitations of existing ReID benchmarks, namely low spatiotemporal coverage and sample imbalance. Despite their demonstrated success on individual benchmarks, these methods struggle to generalize to unseen environments, making them less applicable in large-scale settings. In search of a highly generalizable large-scale ReID method, we present an adversarial domain-invariant feature learning framework (ADIN) that explicitly learns to separate identity-related features from challenging variations, utilizing for the first time "free" annotations in ReID data such as video timestamps and camera indices. Furthermore, we find that imbalance among nuisance classes jeopardizes adversarial training, and to mitigate it we propose a calibrated adversarial loss that is attentive to the nuisance distribution. Experiments on existing large-scale person/vehicle ReID datasets demonstrate that ADIN learns more robust and generalizable representations, as evidenced by its outstanding direct cross-dataset transfer performance, a criterion that better measures the generalizability of large-scale ReID methods.
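The calibrated adversarial loss described above can be illustrated with a minimal sketch: a cross-entropy over nuisance classes (e.g. camera index) reweighted by inverse class frequency, so that rare nuisance classes are not drowned out during adversarial training. The function name, the exact weighting scheme, and all details below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def calibrated_adversarial_loss(logits, nuisance_labels):
    """Inverse-frequency-weighted cross-entropy over nuisance classes
    (e.g. camera index). Rare nuisance classes receive larger weights
    so the adversarial signal is not dominated by frequent ones.
    Hypothetical sketch; not the authors' exact formulation."""
    n, c = logits.shape
    # numerically stable softmax over nuisance classes
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # calibration weights: inverse class frequency, mean-normalized
    counts = np.bincount(nuisance_labels, minlength=c).astype(float)
    w = np.zeros(c)
    nz = counts > 0
    w[nz] = n / (c * counts[nz])
    # weighted negative log-likelihood of the true nuisance label
    per_sample = -np.log(p[np.arange(n), nuisance_labels] + 1e-12)
    return float(np.mean(w[nuisance_labels] * per_sample))
```

In an adversarial setup, this loss would be minimized by a nuisance classifier and maximized (e.g. via gradient reversal) by the feature extractor, pushing identity features to become uninformative about camera or timestamp.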
Pages: 3578 - 3587
Page count: 10
Related Papers
50 records total
  • [31] A Large Scale Benchmark of Person Re-Identification
    Yin, Qingze
    Ding, Guodong
    DRONES, 2024, 8 (07)
  • [32] Scale-invariant batch-adaptive residual learning for person re-identification
    Sikdar, Arindam
    Chowdhury, Ananda S.
    PATTERN RECOGNITION LETTERS, 2020, 129 : 279 - 286
  • [33] ATTENTIVE ADVERSARIAL LEARNING FOR DOMAIN-INVARIANT TRAINING
    Meng, Zhong
    Li, Jinyu
    Gong, Yifan
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 6740 - 6744
  • [34] LEARNING DOMAIN-INVARIANT TRANSFORMATION FOR SPEAKER VERIFICATION
    Zhang, Hanyi
    Wang, Longbiao
    Lee, Kong Aik
    Liu, Meng
    Dang, Jianwu
    Chen, Hui
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7177 - 7181
  • [35] SCALE-INVARIANT SIAMESE NETWORK FOR PERSON RE-IDENTIFICATION
    Zhang, Yunzhou
    Shi, Weidong
    Liu, Shuangwei
    Bao, Jining
    Wei, Ying
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 2436 - 2440
  • [36] DSIL-DDI: A Domain-Invariant Substructure Interaction Learning for Generalizable Drug-Drug Interaction Prediction
    Tang, Zhenchao
    Chen, Guanxing
    Yang, Hualin
    Zhong, Weihe
    Chen, Calvin Yu-Chian
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 10552 - 10560
  • [37] TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification
    Yan, Yichao
    Li, Junjie
    Liao, Shengcai
    Qin, Jie
    MACHINE INTELLIGENCE RESEARCH, 2025, : 337 - 351
  • [38] Style-unaware meta-learning for generalizable person re-identification
    Shao, Jie
    Cai, Pengpeng
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (05)
  • [39] Large-Scale Person Re-Identification Based on Deep Hash Learning
    Ma, Xian-Qin
    Yu, Chong-Chong
    Chen, Xiu-Xin
    Zhou, Lan
    ENTROPY, 2019, 21 (05)
  • [40] Meta Clustering Learning for Large-scale Unsupervised Person Re-identification
    Jin, Xin
    He, Tianyu
    Shen, Xu
    Liu, Tongliang
    Wang, Xinchao
    Huang, Jianqiang
    Chen, Zhibo
    Hua, Xian-Sheng
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 2163 - 2172