Self-Paced Learning for Open-Set Domain Adaptation

Cited by: 0
Authors
Liu X. [1 ,2 ]
Zhou Y. [1 ,2 ]
Zhou T. [3 ]
Qin J. [4 ]
Affiliations
[1] School of Computer Science and Engineering, Southeast University, Nanjing
[2] Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing
[3] School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing
[4] College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing
Funding
National Natural Science Foundation of China;
Keywords
object recognition; open-set domain adaptation; self-paced learning; transfer learning; unsupervised domain adaptation;
DOI
10.7544/issn1000-1239.202330210
Abstract
Domain adaptation tackles the challenge of generalizing knowledge acquired from a source domain to a target domain with a different data distribution. Traditional domain adaptation methods presume that the classes in the source and target domains are identical, which does not always hold in real-world scenarios. Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain. OSDA aims not only to recognize target samples belonging to the known classes shared by the source and target domains, but also to detect unknown-class samples. Traditional domain adaptation methods align the entire target domain with the source domain to minimize domain shift, which inevitably leads to negative transfer in open-set scenarios. We propose a novel framework based on self-paced learning to distinguish known- and unknown-class samples precisely, referred to as SPL-OSDA (self-paced learning for open-set domain adaptation). To utilize unlabeled target samples for self-paced learning, we generate pseudo labels and design a cross-domain mixup method tailored to OSDA scenarios. This strategy minimizes the noise from pseudo labels and ensures that our model progressively learns known-class features of the target domain, beginning with simpler examples and advancing to more complex ones. To improve the reliability of the model in open-set scenarios and meet the requirements of trustworthy AI, multiple criteria are utilized in this paper to distinguish between known and unknown samples. Furthermore, unlike existing OSDA methods that require manual tuning of a hyperparameter threshold to separate known and unknown classes, our proposed method self-tunes a suitable threshold, eliminating the need for empirical tuning during testing. Compared with empirical threshold tuning, our model exhibits good robustness under different hyperparameters and experimental settings.
Comprehensive experiments illustrate that our method consistently achieves superior performance on different benchmarks compared with various state-of-the-art methods. © 2023 Science Press. All rights reserved.
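The two mechanisms the abstract describes, self-paced selection of confident pseudo-labeled target samples and cross-domain mixup between source and target data, can be sketched roughly as below. This is a minimal illustration, not the authors' exact formulation: the linear threshold schedule, the function names, and the Beta-distributed mixup coefficient are all assumptions made for the sketch.

```python
import numpy as np

def self_paced_select(confidences, epoch, max_epoch, high=0.9, base=0.5):
    """Return indices of pseudo-labeled target samples to train on.

    Self-paced curriculum: the confidence threshold starts high, so early
    epochs keep only the easiest (most confident) samples, and relaxes
    linearly toward `base` so harder samples are admitted later.
    (Illustrative schedule; the paper's criteria may differ.)
    """
    thr = high - (high - base) * (epoch / max_epoch)
    return np.where(confidences >= thr)[0]

def cross_domain_mixup(x_src, y_src, x_tgt, y_pseudo, alpha=0.2, rng=None):
    """Mix source samples with pseudo-labeled target samples.

    A standard mixup with a Beta(alpha, alpha) coefficient, applied across
    domains: each mixed sample interpolates a source sample and a randomly
    paired target sample, and likewise for their (one-hot / pseudo) labels.
    Assumes len(x_tgt) >= len(x_src).
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)                      # mixing coefficient
    idx = rng.permutation(len(x_tgt))[: len(x_src)]   # random source-target pairing
    x_mix = lam * x_src + (1.0 - lam) * x_tgt[idx]
    y_mix = lam * y_src + (1.0 - lam) * y_pseudo[idx]
    return x_mix, y_mix
```

In this sketch, early epochs feed the model only high-confidence pseudo-labeled target samples (reducing pseudo-label noise), while mixup blends source and target statistics so the learned features interpolate between domains.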
Pages: 1711-1726 (15 pages)