Pairwise Generalization Network for Cross-Domain Image Recognition

Cited: 0
|
Authors
Y. B. Liu
T. T. Han
Z. Gao
Affiliations
[1] Tianjin University of Technology, Tianjin Key Laboratory of Intelligence Computing and Novel Software Technology and Key Laboratory of Computer Vision and System, Ministry of Education
[2] Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Shandong Computer Science Center (National Supercomputer Center in Jinan)
Source
Neural Processing Letters | 2020 / Vol. 52
Keywords
Cross-domain; Image recognition; Pairwise;
DOI
Not available
Abstract
In recent years, convolutional neural networks have received increasing attention from the computer vision and machine learning communities. Because the training and test domains can differ in distribution, tone, and brightness, researchers have begun to focus on cross-domain image recognition. In this paper, we propose a Pairwise Generalization Network (PGN) for cross-domain image recognition, in which Instance Normalization and Batch Normalization are combined to strengthen performance on the original domain while generalizing to the new domain. Meanwhile, a Siamese architecture is used in the PGN to learn a discriminative embedding subspace that aligns positive sample pairs and separates negative sample pairs, which works well even with only a few labeled target samples. We also add a residual architecture and an MMD loss to the PGN to further improve its performance. Extensive experiments on two public benchmarks show that our PGN significantly outperforms state-of-the-art methods.
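The abstract names the main ingredients of the PGN: mixed Instance/Batch Normalization inside a residual block, a Siamese pairwise objective, and an MMD term between domains. The snippet below is a minimal, hypothetical PyTorch sketch of those ingredients, not the authors' released code; the half-and-half channel split between IN and BN, the single-kernel RBF MMD, and the contrastive margin are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above), illustrating the three components
# the abstract describes: IN+BN in a residual block, a pairwise (Siamese)
# contrastive loss, and an MMD loss between source and target features.
import torch
import torch.nn as nn


class INBNResidualBlock(nn.Module):
    """Residual block whose normalization is split between BN and IN branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Assumption: half of the channels are normalized per-batch, half per-instance.
        self.bn = nn.BatchNorm2d(channels // 2)
        self.inorm = nn.InstanceNorm2d(channels - channels // 2, affine=True)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        half = out.size(1) // 2
        a, b = torch.split(out, [half, out.size(1) - half], dim=1)
        out = torch.cat([self.bn(a), self.inorm(b)], dim=1)
        return self.relu(out + x)  # residual connection


def pairwise_contrastive_loss(f1: torch.Tensor, f2: torch.Tensor,
                              same_label: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pull same-class pairs together and push different-class pairs beyond a margin."""
    d = torch.norm(f1 - f2, dim=1)
    return (same_label * d.pow(2)
            + (1 - same_label) * torch.clamp(margin - d, min=0).pow(2)).mean()


def mmd_rbf(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between two feature batches using a single RBF kernel."""
    def kernel(x, y):
        dist = torch.cdist(x, y) ** 2
        return torch.exp(-dist / (2 * sigma ** 2))
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())
```

In a training loop of this kind, the total objective would typically sum a classification loss on labeled data, the pairwise contrastive term over sampled source/target pairs, and the MMD term over feature batches; the weighting between these terms is not specified here.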
Pages: 1023-1041
Page count: 18
Related Papers
50 records in total
  • [41] Gait recognition with cross-domain transfer networks
    Tong, Suibing
    Fu, Yuzhuo
    Ling, Hefei
    [J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2019, 93 : 40 - 47
  • [42] Cross-domain repetition priming in person recognition
    Burton, AM
    Kelly, SW
    Bruce, V
    [J]. QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY SECTION A-HUMAN EXPERIMENTAL PSYCHOLOGY, 1998, 51 (03): : 515 - 529
  • [43] NLP Cross-Domain Recognition of Retail Products
    Petterson, Tobias
    Oucheikh, Rachid
    Lofstrom, Tuwe
    [J]. PROCEEDINGS OF 2022 7TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING TECHNOLOGIES, ICMLT 2022, 2022, : 237 - 243
  • [44] Hybrid cross-domain joint network for sketch-based image retrieval
    Li Q.
    Zhou Y.
    Li C.
    Peng Y.
    Liang X.
    [J]. Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2022, 54 (05): : 64 - 73
  • [45] H-Net: Neural Network for Cross-domain Image Patch Matching
    Liu, Weiquan
    Shen, Xuelun
    Wang, Cheng
    Zhang, Zhihong
    Wen, Chenglu
    Li, Jonathan
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 856 - 863
  • [46] Sketch-Based Cross-Domain Image Retrieval Via Heterogeneous Network
    Zhang, Hao
    Zhang, Chuang
    Wu, Ming
    [J]. 2017 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2017,
  • [47] Single Cross-domain Semantic Guidance Network for Multimodal Unsupervised Image Translation
    Lan, Jiaying
    Cheng, Lianglun
    Huang, Guoheng
    Pun, Chi-Man
    Yuan, Xiaochen
    Lai, Shangyu
    Liu, HongRui
    Ling, Wing-Kuen
    [J]. MULTIMEDIA MODELING, MMM 2023, PT I, 2023, 13833 : 165 - 177
  • [48] Cross-domain Image Retrieval with a Dual Attribute-aware Ranking Network
    Huang, Junshi
    Feris, Rogerio
    Chen, Qiang
    Yan, Shuicheng
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 1062 - 1070
  • [49] Cross-domain heterogeneous residual network for single image super-resolution
    Ji, Li
    Zhu, Qinghui
    Zhang, Yongqin
    Yin, Juanjuan
    Wei, Ruyi
    Xiao, Jinsheng
    Xiao, Deqiang
    Zhao, Guoying
    [J]. NEURAL NETWORKS, 2022, 149 : 84 - 94
  • [50] Domain Adaptive Sampling for Cross-Domain Point Cloud Recognition
    Wang, Zicheng
    Li, Wen
    Xu, Dong
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (12) : 7604 - 7615