Multimodal Decoupled Representation With Compatibility Learning for Explicit Nonstationary Process Monitoring

Cited by: 3
Authors
Song, Pengyu [1 ]
Zhao, Chunhui [1 ]
Ding, Jinliang [2 ]
Zhao, Shunyi [3 ]
Affiliations
[1] Zhejiang Univ, Coll Control Sci & Engn, Hangzhou 310027, Peoples R China
[2] Northeastern Univ, State Key Lab Synthet Automat Proc Ind, Shenyang 110819, Peoples R China
[3] Jiangnan Univ, Key Lab Adv Proc Control for Light Ind, Minist Educ, Wuxi 214122, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Cascaded learning architecture; compatibility learning; explicit process monitoring; information refinement; multimodal decoupled representation; nonstationarity; NEURAL-NETWORKS; COINTEGRATION; STATIONARY;
DOI
10.1109/TIE.2023.3299013
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Frequent switching of operating conditions in industrial processes tends to make the data distribution time-varying and the variable correlations nonuniform, which poses considerable challenges for the explicit representation and monitoring of nonstationary processes. This study addresses the problem based on the following recognitions: 1) despite changes in operating conditions, there exist time-invariant process mechanisms that are nonlinearly coupled with nonstationarity; 2) such a coupling relationship is not uniform but may show diverse modes changing over time, defined here as multimodal coupling; 3) diverse coupling relations can be derived from the superposition of nonstationarity induced by definite driving forces, i.e., changeable operating-condition settings, reflecting their intrinsic association under complex changes. Accordingly, a cascaded deep information separation (CDIS) architecture with a compatibility learning algorithm is proposed to extract multimodal decoupled representations. We design an information refinement module (IRM) to capture the basic coupling source (BCS) under the influence of driving forces, where a shortcut connection is incorporated into the nonlinear autoencoding structure. Multiple IRMs can be flexibly cascaded to achieve the superposition of BCSs, thus portraying diverse multimodal couplings. Furthermore, by balancing the stationarity requirement and the reconstruction constraint, the designed compatibility learning algorithm induces the cascaded IRMs to capture and filter out nonstationarity, yielding stationary refined data. In this way, stationarity and nonstationarity with multimodal couplings can be fully separated. The validity of CDIS is illustrated on a simulated example and a real condensing-system experimental rig.
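The cascaded IRM idea in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: the function names, the linear-in-weights toy modules, the rolling-mean variance used as a stationarity surrogate, and the trade-off weight `lam` are all our own assumptions standing in for the paper's actual networks and compatibility learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def irm_forward(x, W_enc, W_dec):
    """One hypothetical IRM: nonlinear autoencoding plus a shortcut connection.
    The decoder output stands for a captured basic coupling source (BCS);
    the shortcut passes the residual x - recon on to the next module."""
    h = np.tanh(x @ W_enc)   # encoder captures a coupling source
    recon = h @ W_dec        # decoder reconstructs the captured (nonstationary) part
    return x - recon, recon

def stationarity_penalty(x, window=20):
    """Toy surrogate for the stationarity requirement: variance of the
    rolling mean of one channel (a stationary series has a near-constant mean)."""
    means = np.convolve(x[:, 0], np.ones(window) / window, mode="valid")
    return means.var()

# Drifting multivariate data: a stationary core plus a nonstationary trend.
T, d, k = 200, 4, 2
x = rng.standard_normal((T, d)) + np.linspace(0, 3, T)[:, None]

# Cascade of IRMs: each module removes one superposed coupling source.
refined = x
inputs, recons = [], []
for _ in range(3):  # three cascaded IRMs
    W_enc = 0.1 * rng.standard_normal((d, k))
    W_dec = 0.1 * rng.standard_normal((k, d))
    inputs.append(refined)
    refined, recon = irm_forward(refined, W_enc, W_dec)
    recons.append(recon)

# Compatibility-style objective: balance a reconstruction constraint
# (each module should explain its input) against stationarity of the
# refined residual. The weight lam is purely illustrative.
recon_term = np.mean([np.mean((a - b) ** 2) for a, b in zip(inputs, recons)])
lam = 0.5
loss = recon_term + lam * stationarity_penalty(refined)
```

In a trained model the two terms pull in opposite directions: the reconstruction term keeps each IRM from discarding information, while the stationarity term pushes the cascade to absorb the drift, so the refined data that leaves the last module is stationary.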
Pages: 8121-8131 (11 pages)