Multiscale capsule networks with attention mechanisms based on domain-invariant properties for cross-domain lifetime prediction

Cited by: 1
Authors:
Shang, Zhiwu [1 ,2 ]
Feng, Zehua [1 ,2 ]
Affiliations:
[1] Tiangong Univ, Sch Mech Engn, Tianjin 300387, Peoples R China
[2] Tianjin Modern Electromech Equipment Technol Key Lab, Tianjin 300387, Peoples R China
Funding: National Natural Science Foundation of China
Keywords:
Multi-level domain adaptation; Multiscale capsule network; Attention mechanism; Unsupervised learning
DOI: 10.1016/j.dsp.2023.104368
Chinese Library Classification: TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes: 0808; 0809
Abstract
Remaining useful life (RUL) prediction under variable operating conditions has become a hot research topic in engineering. However, current unsupervised domain adaptation (UDA) methods rely on a single discrepancy measure (MK-MMD or an adversarial mechanism), which improves cross-domain RUL prediction performance only to a limited extent: using MK-MMD or an adversarial mechanism alone offers a single perspective on inter-domain differences. Moreover, MK-MMD measures differences at the feature-distribution level and fails to capture differences in the nonlinear relationships between the source and target domains. In addition, learning high-quality degradation features is itself a key issue in RUL prediction. Motivated by these points, this paper proposes a cross-domain lifetime prediction method based on domain-invariant properties and a multiscale capsule network with an attention mechanism (DI-MCNAM). First, the attention-based multiscale capsule network (MCNAM) extracts deep degradation features from the weighted data. Second, two modules, multilevel domain adaptation and a domain classifier, are integrated to reduce inter-domain differences from different perspectives. The maximum information coefficient (MIC) is introduced in the multilevel domain adaptation module: MIC separately learns the nonlinear relationship between source- and target-domain data over time and, combined with MK-MMD, reduces inter-domain feature differences at multiple levels. The domain classifier discriminates between the confused domains, and inter-domain differences are reduced by minimizing its discriminative power. The performance of the proposed model is validated on publicly available datasets and compared with existing popular methods; the results show that the proposed method achieves high prediction accuracy.
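As a rough illustration of two alignment components named above (not the authors' implementation), the following PyTorch sketch shows a multi-kernel MMD loss computed between batches of source and target features and a gradient-reversal domain classifier. The kernel bandwidths, feature dimension, layer sizes, and the reversal coefficient lam are illustrative assumptions; the MIC term is omitted because it is typically computed with a dedicated estimator (e.g., the minepy package) and is not directly differentiable.

import torch
import torch.nn as nn


def mk_mmd(source, target, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Multi-kernel MMD between source and target feature batches of shape (n, d).
    The Gaussian bandwidths in `sigmas` are illustrative, not the paper's values."""
    x = torch.cat([source, target], dim=0)
    d2 = torch.cdist(x, x).pow(2)                        # pairwise squared distances
    kernel = sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas) / len(sigmas)
    ns = source.size(0)
    k_ss = kernel[:ns, :ns].mean()
    k_tt = kernel[ns:, ns:].mean()
    k_st = kernel[:ns, ns:].mean()
    return k_ss + k_tt - 2.0 * k_st


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated (scaled) gradient
    in the backward pass, the usual trick behind adversarial domain confusion."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DomainClassifier(nn.Module):
    """Tries to tell source features from target features; the reversed gradient
    pushes the upstream feature extractor toward domain-invariant representations."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, feats, lam=1.0):
        return self.net(GradReverse.apply(feats, lam))


# Toy usage with random stand-ins for features from a (hypothetical) MCNAM extractor.
src_feat, tgt_feat = torch.randn(32, 128), torch.randn(32, 128)
align_loss = mk_mmd(src_feat, tgt_feat)                  # distribution-level discrepancy
domain_logits = DomainClassifier(128)(torch.cat([src_feat, tgt_feat]), lam=0.5)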
Pages: 17
Related papers (50 in total):
  • [21] Cross-domain adaptation network based on attention mechanism for tool wear prediction
    He, Jianliang
    Sun, Yuxin
    Yin, Chen
    He, Yan
    Wang, Yulin
    JOURNAL OF INTELLIGENT MANUFACTURING, 2023, 34 (08) : 3365 - 3387
  • [22] DITN: User's indirect side-information involved domain-invariant feature transfer network for cross-domain recommendation
    Ni, Xin
    Nie, Jie
    Zuo, Zijie
    Xie, Huaxin
    Liang, Xinyue
    Jiang, Mingxing
    Xu, Jianliang
    Yu, Shusong
    Liu, Min
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (06)
  • [23] Semantic invariant cross-domain image generation with generative adversarial networks
    Mao, Xiaofeng
    Wang, Shuhui
    Zheng, Liying
    Huang, Qingming
    NEUROCOMPUTING, 2018, 293 : 55 - 63
  • [24] Domain-invariant attention network for transfer learning between cross-scene hyperspectral images
    Ye, Minchao
    Wang, Chenglong
    Meng, Zhihao
    Xiong, Fengchao
    Qian, Yuntao
    IET COMPUTER VISION, 2023, 17 (07) : 739 - 749
  • [25] A cross-domain user association scheme based on graph attention networks with trajectory embedding
    Cen, Keqing
    Yang, Zhenghao
    Wang, Ze
    Dong, Minhong
    MACHINE LEARNING, 2024, : 7905 - 7930
  • [26] Domain adaptation based on domain-invariant and class-distinguishable feature learning using multiple adversarial networks
    Fan, Cangning
    Liu, Peng
    Xiao, Ting
    Zhao, Wei
    Tang, Xianglong
    NEUROCOMPUTING, 2020, 411 : 178 - 192
  • [27] Cross-Domain Adaptative Learning for Online Advertisement Customer Lifetime Value Prediction
    Su, Hongzu
    Du, Zhekai
    Li, Jingjing
    Zhu, Lei
    Lu, Ke
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 4605 - 4613
  • [28] Hypergraph and cross-attention-based unsupervised domain adaptation framework for cross-domain myocardial infarction localization
    Yuan, Shuaiying
    He, Ziyang
    Zhao, Jianhui
    Yuan, Zhiyong
    Alhudhaif, Adi
    Alenezi, Fayadh
    INFORMATION SCIENCES, 2023, 633 : 245 - 263
  • [29] Cross-Domain Joint Object Detection Based on Visual Selective Attention
    Shi, Yi
    Gao, Gangyao
    Qin, Long
    Yan, Hongmei
    INTERNATIONAL JOURNAL OF PSYCHOPHYSIOLOGY, 2021, 168 : S134 - S134
  • [30] An attention network based on feature sequences for cross-domain sentiment classification
    Meng, Jiana
    Dong, Yu
    Long, Yingchun
    Zhao, Dandan
    INTELLIGENT DATA ANALYSIS, 2021, 25 (03) : 627 - 640