Multi-Source Contribution Learning for Domain Adaptation

Cited by: 33
Authors
Li, Keqiuyin [1 ]
Lu, Jie [1 ]
Zuo, Hua [1 ]
Zhang, Guangquan [1 ]
Affiliations
[1] Univ Technol Sydney, Ctr Artificial Intelligence, Fac Engn & Informat Technol, Sydney, NSW 2007, Australia
Funding
Australian Research Council;
Keywords
Feature extraction; Task analysis; Transfer learning; Learning systems; Training; Adaptation models; Visualization; Classification; deep learning; domain adaptation; transfer learning;
DOI
10.1109/TNNLS.2021.3069982
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transfer learning has become an attractive technique for tackling a task in a target domain by leveraging knowledge previously acquired from a similar (source) domain. Many existing transfer learning methods focus on learning one discriminator from a single source domain. Sometimes, knowledge from a single source domain is not sufficient to predict the target task, so multiple source domains carrying richer transferable information are considered to complete the target task. Although some previous studies address multi-source domain adaptation, these methods commonly combine source predictions by averaging source performances. Different source domains contain different transferable information and may therefore contribute differently to the target domain; hence, source contribution should be taken into account when predicting the target task. In this article, we propose a novel multi-source contribution learning method for domain adaptation (MSCLDA). In the proposed method, the similarities and diversities of domains are learned simultaneously by extracting multi-view features. One view represents common features (similarities) among all domains. The other views represent different characteristics (diversities) of the target domain, each expressed by features extracted from one source domain. Multi-level distribution matching is then employed to improve the transferability of the latent features, reducing the misclassification of boundary samples by maximizing the discrepancy between different classes and minimizing the discrepancy within the same class. Concurrently, when combining source predictions to complete the target task, instead of averaging the source predictions or weighting sources with normalized similarities alone, the original weights learned by normalizing the similarities between the source and target domains are adjusted using pseudo target labels to increase the disparity of the weight values, which improves the performance of the final target predictor when the source predictions differ significantly. Experiments on real-world visual data sets demonstrate the superiority of the proposed method.
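The weighting step described in the abstract can be illustrated with a small sketch: original source weights come from normalized source-target similarities and are then adjusted with pseudo target labels to widen the gaps between weight values. The cosine similarity on mean latent features, the agreement-based adjustment, and the softmax temperature below are illustrative assumptions for the sketch, not the exact formulation used in MSCLDA.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two mean feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def contribution_weights(source_feats, target_feats, source_probs, temperature=0.5):
    """Contribution weights for combining source predictions (illustrative sketch).

    source_feats : list of (n_i, d) arrays, latent features of each source domain
    target_feats : (n_t, d) array, latent features of the target domain
    source_probs : list of (n_t, C) arrays, each source classifier's softmax
                   output on the target samples
    """
    t_mean = target_feats.mean(axis=0)

    # 1) Original weights: normalized similarities between each source and the target.
    sims = np.array([cosine_similarity(s.mean(axis=0), t_mean) for s in source_feats])
    sims = np.clip(sims, 1e-6, None)  # guard against non-positive similarities
    w = sims / sims.sum()

    # 2) Pseudo target labels from the similarity-weighted ensemble.
    ensemble = sum(wi * p for wi, p in zip(w, source_probs))
    pseudo = ensemble.argmax(axis=1)

    # 3) Adjust each weight by its source's agreement with the pseudo labels and
    #    sharpen with a softmax temperature to increase the disparity of the weights.
    agreement = np.array([(p.argmax(axis=1) == pseudo).mean() for p in source_probs])
    logits = (w * agreement) / temperature
    w_adj = np.exp(logits - logits.max())
    return w_adj / w_adj.sum()

# Toy usage: three sources, 100 target samples, 5 classes, 64-dim latent features.
rng = np.random.default_rng(0)
src_feats = [rng.normal(size=(200, 64)) + i for i in range(3)]
tgt_feats = rng.normal(size=(100, 64))
src_probs = [rng.dirichlet(np.ones(5), size=100) for _ in range(3)]
w = contribution_weights(src_feats, tgt_feats, src_probs)
final_pred = sum(wi * p for wi, p in zip(w, src_probs)).argmax(axis=1)
```

In this sketch, lowering the temperature pulls the weights further apart toward the source that agrees most with the pseudo labels, while raising it pushes them back toward a uniform average; the final target prediction is the weighted combination of the source classifiers' outputs.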
Pages: 5293 - 5307
Page count: 15
Related Papers
50 records in total
  • [1] Chen, Sentao; Zheng, Lin; Wu, Hanrui. Riemannian representation learning for multi-source domain adaptation. PATTERN RECOGNITION, 2023, 137.
  • [2] He, Jianzhong; Jia, Xu; Chen, Shuaijun; Liu, Jianzhuang. Multi-Source Domain Adaptation with Collaborative Learning for Semantic Segmentation. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 11003 - 11012.
  • [3] Wei, Yikang; Yang, Liu; Han, Yahong; Hu, Qinghua. Multi-Source Collaborative Contrastive Learning for Decentralized Domain Adaptation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (05): 2202 - 2216.
  • [4] Zhao, Sicheng; Wang, Guangzhi; Zhang, Shanghang; Gu, Yang; Li, Yaxian; Song, Zhichao; Xu, Pengfei; Hu, Runbo; Chai, Hua; Keutzer, Kurt. Multi-Source Distilling Domain Adaptation. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34: 12975 - 12983.
  • [5] Sun, Shiliang; Shi, Honglei; Wu, Yuanbin. A survey of multi-source domain adaptation. INFORMATION FUSION, 2015, 24: 84 - 92.
  • [6] Sun, Shi-Liang; Shi, Hong-Lei. Bayesian Multi-Source Domain Adaptation. PROCEEDINGS OF 2013 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS (ICMLC), VOLS 1-4, 2013: 24 - 28.
  • [7] Shaker, Ammar; Lawrence, Carolin. Multi-Source Survival Domain Adaptation. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 8, 2023: 9752 - 9762.
  • [8] Xu, Yuanyuan; Kan, Meina; Shan, Shiguang; Chen, Xilin. Mutual Learning of Joint and Separate Domain Alignments for Multi-Source Domain Adaptation. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022: 1658 - 1667.
  • [9] Jangala, Sampreeth; Sanodiya, Rakesh Kumar. A Novel Framework for Multi-Source Domain Adaptation with Discriminative Feature Learning. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023.
  • [10] Qiu, Shuhao; Zhu, Chuang; Zhou, Wenli. Meta Self-Learning for Multi-Source Domain Adaptation: A Benchmark. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021: 1592 - 1601.