Deep CockTail Networks: A Universal Framework for Visual Multi-source Domain Adaptation

Cited by: 0
Authors
Ziliang Chen
Pengxu Wei
Jingyu Zhuang
Guanbin Li
Liang Lin
Affiliations
[1] Sun Yat-sen University, School of Computer Science and Engineering
[2] Carnegie Mellon University, Machine Learning Department
Source: International Journal of Computer Vision, 2021, 129(8)
Keywords
Multi-source domain adaptation; Cross-domain visual recognition; Domain shift; Category shift; Open-set domain adaptation; Diverse transfer scenarios;
DOI: not available
Abstract
Transferable deep representations for visual domain adaptation (DA) provide a route to learn from labeled source images how to recognize target images without target-domain supervision. This line of research has attracted increasing interest due to its industrial potential for reducing annotation effort while achieving remarkable generalization. However, DA presumes that source images are identically sampled from a single source, whereas Multi-Source DA (MSDA) is ubiquitous in the real world. In MSDA, domain shifts exist not only between the source and target domains but also among the sources; in particular, the multiple source domains and the target domain may disagree on their semantics (e.g., category shifts). This issue challenges existing MSDA solutions. In this paper, we propose Deep CockTail Network (DCTN), a universal and flexibly deployed framework to address these problems. DCTN uses a multi-way adversarial learning pipeline to minimize the domain discrepancy between the target domain and each of the multiple source domains in order to learn domain-invariant features. The derived source-specific perplexity scores measure how similar each target feature appears to features from each source domain. The multi-source category classifiers are integrated with the perplexity scores to categorize target images. We further provide a theoretical analysis of DCTN, including an interpretation of why DCTN can succeed without precisely crafting source-specific hyper-parameters, and upper bounds on the target expected loss in terms of domain and category shifts. In our experiments, DCTN is evaluated on four benchmarks whose empirical studies cover the vanilla setting and three challenging category-shift transfer problems in MSDA, i.e., source-shift, target-shift and source-target-shift scenarios. The results show that DCTN significantly boosts classification accuracy in MSDA and is remarkably effective at resisting negative transfer across different MSDA scenarios.
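
As an illustration of the mechanism described in the abstract (not the authors' released code), the following PyTorch-style sketch shows one way source-specific perplexity scores, here taken from per-source domain discriminators, could weight per-source category classifiers to predict labels for target images. The module names, the softmax normalization of the scores, and the averaging scheme are assumptions for illustration only.

# Minimal, illustrative sketch of perplexity-weighted multi-source prediction.
# All names (PerplexityWeightedPredictor, discriminators, classifiers) are
# hypothetical placeholders, not DCTN's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerplexityWeightedPredictor(nn.Module):
    def __init__(self, feature_extractor, discriminators, classifiers):
        super().__init__()
        self.feature_extractor = feature_extractor            # shared backbone
        self.discriminators = nn.ModuleList(discriminators)   # one per source domain
        self.classifiers = nn.ModuleList(classifiers)         # one per source domain

    def forward(self, x_target):
        feat = self.feature_extractor(x_target)               # (batch, feat_dim)
        # Each discriminator scores how source-like the target feature appears;
        # these scores stand in for the source-specific perplexity scores.
        scores = torch.stack([d(feat).squeeze(-1) for d in self.discriminators], dim=1)
        weights = F.softmax(scores, dim=1)                     # normalize across sources
        # Each source-specific classifier outputs class probabilities.
        probs = torch.stack([F.softmax(c(feat), dim=1) for c in self.classifiers], dim=1)
        # Integrate per-source predictions with the perplexity weights.
        return (weights.unsqueeze(-1) * probs).sum(dim=1)      # (batch, num_classes)

At training time, the multi-way adversarial pipeline mentioned in the abstract would additionally pit the discriminators against the shared feature extractor to reduce the target-to-source discrepancies; that stage is omitted from this inference-only sketch.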
Pages: 2328-2351
Number of pages: 23