Representation Learning and Knowledge Distillation for Lightweight Domain Adaptation

Cited: 0
Authors
Bin Shah, Sayed Rafay [1 ]
Putty, Shreyas Subhash [1 ]
Schwung, Andreas [1 ]
Affiliation
[1] South Westphalia Univ Appl Sci, Dept Elect Power Engn, Soest, Germany
Keywords
Unsupervised domain adaptation; maximum mean discrepancy; knowledge distillation; representation learning; remaining useful lifetime estimation; C-MAPSS
DOI
10.1109/CAI59869.2024.00214
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In industrial machine learning applications, insufficient data, missing labels, distribution shift between subsets, and varying operational conditions result in poor generalization by pre-trained neural network models across domains. In contrast to image detection tasks, time series datasets contain critical domain-specific characteristics that the corresponding networks must learn. Naively aligning the learned representations during adaptation risks losing this key information and thus degrades performance. This paper proposes a lightweight domain adaptation method combining representation learning and knowledge distillation (RepLKD). A separate network is pre-trained, with the help of a reconstructor, to capture valuable information from the target data in its latent space. In the adaptation stage, we use maximum mean discrepancy (MMD) to minimize the difference between the source and target latent distributions. Additionally, we apply knowledge distillation to encourage the target network to generate source-like latent embeddings, penalizing it only when an upper-bound condition is violated, which prevents over-regularization and the loss of domain-specific features. Finally, we evaluate the proposed method on 12 cross-domain scenarios of the C-MAPSS dataset and compare its efficacy against existing methods from the literature.
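The two adaptation losses outlined in the abstract can be sketched compactly. The following is a minimal PyTorch sketch, not the authors' implementation: the Gaussian-kernel MMD estimator, the margin tau used for the upper-bound condition, the weighting factor 0.5, and all tensor shapes are illustrative assumptions.

    import torch

    def gaussian_kernel(x, y, sigma=1.0):
        # Gaussian (RBF) kernel matrix from pairwise squared distances.
        sq_dists = torch.cdist(x, y, p=2.0) ** 2
        return torch.exp(-sq_dists / (2.0 * sigma ** 2))

    def mmd_loss(z_src, z_tgt, sigma=1.0):
        # Biased estimate of the squared maximum mean discrepancy
        # between two batches of latent codes.
        return (gaussian_kernel(z_src, z_src, sigma).mean()
                + gaussian_kernel(z_tgt, z_tgt, sigma).mean()
                - 2.0 * gaussian_kernel(z_src, z_tgt, sigma).mean())

    def bounded_kd_loss(z_student, z_teacher, tau=0.1):
        # Hinge-style distillation: a sample is penalized only when its
        # distance to the (detached) teacher embedding exceeds the
        # upper bound tau, so embeddings that are already source-like
        # receive no gradient and domain-specific features are not
        # over-regularized.
        per_sample = ((z_student - z_teacher.detach()) ** 2).mean(dim=1)
        return torch.clamp(per_sample - tau, min=0.0).mean()

    # Hypothetical adaptation step: 64 windows each from the source and
    # target domains, encoded into a 32-dimensional latent space.
    z_src = torch.randn(64, 32)                      # source latent codes
    z_tgt = torch.randn(64, 32, requires_grad=True)  # target latent codes
    loss = mmd_loss(z_src, z_tgt) + 0.5 * bounded_kd_loss(z_tgt, z_src)
    loss.backward()

Here bounded_kd_loss pairs the two batches sample-by-sample for simplicity; how the paper actually pairs student and teacher embeddings is not specified in the abstract.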
Pages: 1202-1207
Page count: 6