Fuzzy Shared Representation Learning for Multistream Classification

Times Cited: 0
Authors
Yu, En [1 ]
Lu, Jie [1 ]
Zhang, Guangquan [1 ]
Affiliations
[1] Univ Technol Sydney, Australian Artificial Intelligence Inst AAII, Fac Engn & Informat Technol, Decis Syst & E Serv Intelligence Lab, Ultimo, NSW 2007, Australia
Funding
Australian Research Council;
Keywords
Concept drift; fuzzy systems; adaptation models; data models; uncertainty; task analysis; monitoring; multistream classification; transfer learning; concept drift detection; model
DOI
10.1109/TFUZZ.2024.3423024
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multistream classification aims to predict the target stream by transferring knowledge from labeled source streams amid nonstationary processes with concept drift. While existing methods address label scarcity, covariate shift, and asynchronous concept drift, they operate solely in the original feature space, neglecting the influence of redundant or low-quality features with uncertainty. Advancing this task therefore still requires: 1) learning reliable joint representations of different streams; 2) handling uncertainty and interpretability during knowledge transfer; and 3) tracking and adapting to the asynchronous drifts in each stream. To address these challenges, we propose an interpretable fuzzy shared representation learning (FSRL) method based on the Takagi-Sugeno-Kang (TSK) fuzzy system. Specifically, FSRL accomplishes the nonlinear transformation of individual streams by learning a fuzzy mapping with the antecedents of the TSK fuzzy system, thereby preserving discriminative information of each original stream in an interpretable way. A multistream joint distribution adaptation algorithm is then proposed to optimize the consequent part of the TSK fuzzy system, which learns the final fuzzy shared representations for the different streams. The method thus concurrently captures both the commonalities across streams and the distinctive information within each stream. Finally, window-based and GMM-based online adaptation strategies are designed to address the asynchronous drifts over time: the former directly demonstrates the effectiveness of FSRL in knowledge transfer across multiple streams, while the GMM-based strategy overcomes the asynchronous drift problem in an informed way by integrating drift detection and adaptation. Extensive experiments on several synthetic and real-world benchmarks with concept drift demonstrate the proposed method's effectiveness and efficiency.
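The two-stage construction described above — an antecedent fuzzy mapping that nonlinearly transforms each stream, followed by a consequent part that produces the shared representation — can be sketched as follows. This is a minimal illustration only, assuming Gaussian membership functions, a product T-norm, and per-rule linear consequents; the function names, array shapes, and the helper `tsk_consequent` are hypothetical and not taken from the paper.

```python
import numpy as np

def tsk_antecedent_mapping(X, centers, sigmas):
    """Map a batch of stream samples to normalized rule firing strengths.

    X:       (n_samples, n_features) batch from one stream
    centers: (n_rules, n_features) Gaussian membership centers
    sigmas:  (n_rules, n_features) Gaussian membership widths
    Returns: (n_samples, n_rules) normalized firing strengths.
    """
    # Gaussian membership of each sample in each rule, per feature
    diff = X[:, None, :] - centers[None, :, :]                  # (n, r, d)
    memberships = np.exp(-(diff ** 2) / (2 * sigmas[None, :, :] ** 2))
    # Product T-norm across features gives each rule's firing strength
    firing = memberships.prod(axis=2)                           # (n, r)
    # Normalize so the strengths for each sample sum to 1
    return firing / firing.sum(axis=1, keepdims=True)

def tsk_consequent(X, strengths, W):
    """Consequent part: firing-strength-weighted per-rule linear maps.

    W: (n_rules, n_features, latent_dim) consequent weights, which would be
    the quantities optimized by the joint distribution adaptation step.
    Returns: (n_samples, latent_dim) fuzzy shared representation.
    """
    rule_outputs = np.einsum('nd,rdk->nrk', X, W)   # per-rule projections
    return np.einsum('nr,nrk->nk', strengths, rule_outputs)
```

In this sketch the antecedent mapping is what makes the transform interpretable: each sample's representation is a convex combination of rule-local linear projections, weighted by how strongly the sample fires each human-readable fuzzy rule.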
Pages: 5625-5637
Number of Pages: 13