Learning Time Series Counterfactuals via Latent Space Representations

Cited by: 7
|
Authors
Wang, Zhendong [1]
Samsten, Isak [1]
Mochaourab, Rami [2]
Papapetrou, Panagiotis [1]
Affiliations
[1] Stockholm Univ, Stockholm, Sweden
[2] RISE Res Inst Sweden, Stockholm, Sweden
Source
DISCOVERY SCIENCE (DS 2021) | 2021 / Vol. 12986
Keywords
Time series classification; Interpretability; Counterfactual explanations; Deep learning;
DOI
10.1007/978-3-030-88942-5_29
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Counterfactual explanations provide sample-based explanations that indicate which features of an original sample must be modified to change the classification result from an undesired state to a desired one; they thereby lend interpretability to the model. Previous work on LatentCF presents an algorithm for image data that employs auto-encoder models to transform original samples directly into counterfactuals in a latent space representation. In this paper, we adapt the approach to time series classification and propose an improved algorithm named LatentCF++, which introduces additional constraints in the counterfactual generation process. We conduct an extensive experiment on a total of 40 datasets from the UCR archive, comparing against current state-of-the-art methods. Based on our evaluation metrics, we show that the LatentCF++ framework can, with high probability, generate valid counterfactuals and achieve explanations comparable to the current state-of-the-art. Our proposed approach can also generate counterfactuals that are considerably closer to the decision boundary in terms of margin difference.
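The core loop the abstract describes (encode a sample, perturb its latent code by gradient steps until the classifier's prediction flips, then decode the perturbed code as the counterfactual) can be sketched as follows. This is a minimal illustration under strong simplifying assumptions, not the paper's implementation: the encoder, decoder, and classifier here are toy linear stand-ins for the trained deep auto-encoder and classifier used in LatentCF/LatentCF++, and all names are hypothetical.

```python
import numpy as np

# Toy linear stand-ins for the paper's trained deep models (assumption:
# the real method uses deep auto-encoders and a deep classifier).
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 4))  # "encoder": 8-step series -> 4-dim latent
W_dec = W_enc.T / 8.0            # "decoder": 4-dim latent -> 8-step series
w_clf = rng.normal(size=8)       # linear "classifier" on the decoded series

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def predict(z):
    """Probability of the desired class for the decoded sample."""
    return sigmoid(w_clf @ (z @ W_dec))

def latent_cf(x, target=0.5, lr=1.0, max_iter=50_000):
    """Gradient ascent in latent space until the prediction reaches `target`."""
    z = x @ W_enc                # start from the sample's own latent code
    d = W_dec @ w_clf            # fixed ascent direction for these linear toys
    for _ in range(max_iter):
        p = predict(z)
        if p >= target:          # validity check: stop as soon as the
            break                # prediction flips, keeping the margin small
        z = z + lr * p * (1.0 - p) * d   # gradient of predict() w.r.t. z
    return z @ W_dec, predict(z)         # decoded counterfactual, its score

x = rng.normal(size=8)
if predict(x @ W_enc) >= 0.5:
    x = -x                       # ensure the sample starts in the undesired class
p_orig = predict(x @ W_enc)
cf, p_cf = latent_cf(x)
```

Stopping at the first latent point whose prediction crosses the threshold is one way to read the margin result in the abstract: the first valid counterfactual found along the ascent path lies close to the decision boundary by construction.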
Pages: 369-384
Number of pages: 16
Related Papers
50 items in total
  • [21] Ryabko, Daniil. Time-Series Information and Unsupervised Learning of Representations. IEEE TRANSACTIONS ON INFORMATION THEORY, 2020, 66(03): 1702-1713
  • [22] Maulik, Romit; Mohan, Arvind; Lusch, Bethany; Madireddy, Sandeep; Balaprakash, Prasanna; Livescu, Daniel. Time-series learning of latent-space dynamics for reduced-order model closure. PHYSICA D-NONLINEAR PHENOMENA, 2020, 405
  • [23] Liu, Shengzhong; Kimura, Tomoyoshi; Liu, Dongxin; Wang, Ruijie; Li, Jinyang; Diggavi, Suhas; Srivastava, Mani; Abdelzaher, Tarek. FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [24] Meng, Yu; Zhang, Yunyi; Huang, Jiaxin; Zhang, Yu; Han, Jiawei. Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations. PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022: 3143-3152
  • [25] Cha, Jaehoon; Thiyagalingam, Jeyan. Orthogonality-Enforced Latent Space in Autoencoders: An Approach to Learning Disentangled Representations. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023
  • [26] Rongali, Subendhu; Rose, Adam J.; McManus, David D.; Bajracharya, Adarsha S.; Kapoor, Alok; Granillo, Edgard; Yu, Hong. Learning Latent Space Representations to Predict Patient Outcomes: Model Development and Validation. JOURNAL OF MEDICAL INTERNET RESEARCH, 2020, 22(03)
  • [27] Liu, Bao-Yu; Huang, Ling; Wang, Chang-Dong; Lai, Jian-Huang; Yu, Philip S. Multiview Clustering via Proximity Learning in Latent Representation Space. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34(02): 973-986
  • [28] Yu, Yunlong; Ji, Zhong; Guo, Jichang; Zhang, Zhongfei. Zero-Shot Learning via Latent Space Encoding. IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49(10): 3755-3766
  • [29] Tang, Jie; Hall, Wendy. Cross-Domain Ranking via Latent Space Learning. THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017: 2618-2624
  • [30] Lin, Hao; Liu, Guannan; Wu, Junjie; Zhao, J. Leon. Deterring the Gray Market: Product Diversion Detection via Learning Disentangled Representations of Multivariate Time Series. INFORMS JOURNAL ON COMPUTING, 2024, 36(02): 571-586