Transformer Based Multi-Source Domain Adaptation

Cited by: 0
Authors
Wright, Dustin [1 ]
Augenstein, Isabelle [1 ]
Affiliations
[1] Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
Keywords
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In practical machine learning settings, the data on which a model must make predictions often come from a different distribution than the data it was trained on. Here, we investigate unsupervised multi-source domain adaptation, where a model is trained on labelled data from multiple source domains and must make predictions on a domain for which no labelled data has been seen. Prior work with CNNs and RNNs has demonstrated the benefit of mixture of experts, in which the predictions of multiple domain-expert classifiers are combined, as well as of domain adversarial training, which induces a domain-agnostic representation space. Inspired by this, we investigate how such methods can be effectively applied to large pretrained transformer models. We find that domain adversarial training affects the learned representations of these models while having little effect on their performance, suggesting that large transformer-based models are already relatively robust across domains. Additionally, we show that mixture of experts leads to significant performance improvements by comparing several variants of mixing functions, including a novel mixture based on attention. Finally, we demonstrate that the predictions of large pretrained transformer-based domain experts are highly homogeneous, making it challenging to learn effective functions for mixing their predictions.
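The attention-based mixing of domain-expert predictions described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the expert probabilities, domain-similarity scores, and shapes below are hypothetical, and a real model would compute the scores from learned representations of the input.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_mixture(expert_probs, domain_scores):
    """Combine domain-expert predictions via attention weights.

    expert_probs:  (num_experts, num_classes) class probabilities,
                   one row per source-domain expert
    domain_scores: (num_experts,) similarity of the input to each
                   source domain (e.g. a learned compatibility score)
    """
    weights = softmax(domain_scores)   # attention distribution over experts
    return weights @ expert_probs      # weighted average of expert predictions

# Hypothetical example: three domain experts on a binary task.
probs = np.array([[0.9, 0.1],
                  [0.6, 0.4],
                  [0.2, 0.8]])
scores = np.array([2.0, 0.5, -1.0])    # input looks most like domain 0
mixed = attention_mixture(probs, scores)
```

Because the softmax weights favour the expert whose domain best matches the input, the mixed prediction leans toward that expert's output while still pooling evidence from the others.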
Pages: 7963-7974
Page count: 12