Representation learning for unsupervised domain adaptation

Cited: 0
Authors
Xu Y. [1]
Yan H. [1]
Institution
[1] College of Electronics and Information Engineering, Sichuan University, Chengdu
Keywords
Distributional divergence; Pattern recognition; Representation learning; Transfer learning; Unsupervised domain adaptation
DOI
10.11918/202006133
Abstract
Among conventional training methods for pattern recognition, supervised learning with abundant labels achieves high recognition accuracy. In practice, however, samples often lack labels, or existing labeled samples cannot be used directly because their distribution diverges from that of the target samples. To address these issues, unsupervised domain adaptation recognizes samples of an unlabeled target domain by exploiting data from a source domain that has sufficient labels but a different distribution. For the case where the distributions of the target recognition samples and the source training samples differ, an optimal representation learning method for unsupervised domain adaptation was proposed. Two representation matrices were introduced in the common subspace of the domain samples to better reduce the distributional divergence between the domains. Optimization constraints were then imposed on the two representation matrices so that the source domain and the target domain optimally represent each other, thereby further reducing the distributional divergence between them. In this way, the unlabeled target-domain samples could be recognized using the fully labeled source-domain samples (i.e., transfer learning). Experiments on three common unsupervised domain adaptation datasets show that the proposed method outperformed conventional transfer learning methods and deep learning methods in recognition accuracy, which verifies the validity and robustness of the proposed algorithm. Copyright © 2021 Journal of Harbin Institute of Technology. All rights reserved.
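The abstract does not give the method's equations, so the following is only a loose, minimal sketch of the two ideas it names: measuring distributional divergence between domains, and learning representation matrices through which each domain is expressed in terms of the other. All names (`mmd_linear`, `mutual_representation`), the ridge regularizer `reg`, and the toy data are my assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Empirical Maximum Mean Discrepancy with a linear kernel:
    the squared Euclidean distance between the two domain means."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def mutual_representation(Xs, Xt, reg=1e-3):
    """Ridge-regularized least squares yielding two representation
    matrices: Z_ts reconstructs target samples from source samples
    (Xt ~ Z_ts @ Xs) and Z_st does the reverse (Xs ~ Z_st @ Xt)."""
    G = Xs @ Xs.T + reg * np.eye(Xs.shape[0])
    Z_ts = np.linalg.solve(G, Xs @ Xt.T).T   # target-from-source
    H = Xt @ Xt.T + reg * np.eye(Xt.shape[0])
    Z_st = np.linalg.solve(H, Xt @ Xs.T).T   # source-from-target
    return Z_ts, Z_st

# Toy domains with a mean shift standing in for distributional divergence.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(50, 10))   # labeled source domain
Xt = rng.normal(0.5, 1.0, size=(40, 10))   # unlabeled, shifted target

Z_ts, Z_st = mutual_representation(Xs, Xt)
Xt_rec = Z_ts @ Xs   # target domain expressed through source samples
```

In the paper, such representation matrices would additionally be constrained and learned jointly with the common subspace; here the closed-form ridge solution merely shows what "one domain optimally represents the other" can mean as a least-squares problem.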
Pages: 40-46 (6 pages)