Representation Learning and Knowledge Distillation for Lightweight Domain Adaptation

Times Cited: 0
Authors
Bin Shah, Sayed Rafay [1 ]
Putty, Shreyas Subhash [1 ]
Schwung, Andreas [1 ]
Affiliations
[1] South Westphalia Univ Appl Sci, Dept Elect Power Engn, Soest, Germany
Keywords
Unsupervised domain adaptation; maximum mean discrepancy; knowledge distillation; representation learning; remaining useful lifetime estimation; C-MAPSS;
DOI
10.1109/CAI59869.2024.00214
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In industrial machine learning applications, insufficient data, missing labels, distribution shifts between subsets, and varying operational conditions lead to poor cross-domain generalization of pre-trained neural network models. In contrast to image detection tasks, time series datasets contain critical domain-specific characteristics that the corresponding networks must learn. Naively aligning the learned representations during adaptation increases the risk of losing this key information, resulting in poor performance. This paper proposes a lightweight domain adaptation method combining representation learning and knowledge distillation (RepLKD). A separate network is pre-trained with the help of a reconstructor to capture valuable information from the target data in its latent space. In the adaptation stage, we use maximum mean discrepancy to minimize the difference between the source and target latent distributions. Additionally, we apply knowledge distillation to encourage the target network to generate source-like latent embeddings, penalizing it only when an upper-bound condition is not fulfilled, which prevents over-regularization and the loss of domain-specific features. Finally, we evaluate the proposed method on 12 cross-domain scenarios of the C-MAPSS dataset and compare its efficacy against existing methods from the literature.
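The adaptation objective described in the abstract combines an MMD term over the latent spaces with a distillation term that is applied only when an upper-bound condition is violated. A minimal PyTorch-style sketch of such a loss is given below; the Gaussian kernel, the hinge-style margin, and the weights (lam_mmd, lam_kd, margin) are illustrative assumptions, not the authors' exact formulation.

    import torch

    def gaussian_mmd(x, y, sigma=1.0):
        # Empirical maximum mean discrepancy between two batches of latent
        # embeddings, using a Gaussian kernel (kernel choice is an assumption).
        def kernel(a, b):
            d = torch.cdist(a, b) ** 2          # squared pairwise distances
            return torch.exp(-d / (2 * sigma ** 2))
        return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

    def bounded_distillation(z_target, z_teacher, margin=0.1):
        # Distillation term that is active only when the target embedding drifts
        # farther than `margin` from the frozen source-like (teacher) embedding;
        # the margin stands in for the paper's upper-bound condition (assumption).
        dist = torch.norm(z_target - z_teacher.detach(), dim=1)
        return torch.clamp(dist - margin, min=0.0).mean()

    def adaptation_loss(z_source, z_target, z_teacher, lam_mmd=1.0, lam_kd=0.5):
        # Combined objective: align source/target latent distributions via MMD
        # and keep target embeddings source-like without over-regularizing them.
        return (lam_mmd * gaussian_mmd(z_source, z_target)
                + lam_kd * bounded_distillation(z_target, z_teacher))

Under these assumptions, target embeddings that already lie close to the teacher's source-like embedding receive no distillation gradient, which reflects the abstract's stated goal of avoiding over-regularization and preserving domain-specific features.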
Pages: 1202 - 1207
Page Count: 6