Causal Factor Disentanglement for Few-Shot Domain Adaptation in Video Prediction

Cited: 0
|
Authors
Cornille, Nathan [1 ]
Laenen, Katrien [1 ]
Sun, Jingyuan [1 ]
Moens, Marie-Francine [1 ]
Affiliations
[1] Katholieke Univ Leuven, Dept Comp Sci, Language Intelligence & Informat Retrieval LIIR La, B-3001 Leuven, Belgium
Funding
European Research Council;
Keywords
causal representation learning; video prediction; transfer learning; few-shot learning;
DOI
10.3390/e25111554
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
An important challenge in machine learning is performing accurately when few training samples are available from the target distribution. If a large number of training samples from a related distribution are available, transfer learning can be used to improve performance. This paper investigates how to perform transfer learning more effectively when the source and target distributions are related through a Sparse Mechanism Shift, for the application of next-frame prediction. We create Sparse Mechanism Shift-TempoRal Intervened Sequences (SMS-TRIS), a benchmark derived from the TRIS datasets for evaluating transfer learning in next-frame prediction. We then propose to exploit the Sparse Mechanism Shift property of the distribution shift by disentangling the model parameters with respect to the true causal mechanisms underlying the data. We use the Causal Identifiability from TempoRal Intervened Sequences (CITRIS) model to achieve this disentanglement via causal representation learning. We show that encouraging disentanglement with the CITRIS extensions can improve performance, but their effectiveness varies with the dataset and backbone used: it helps only when encouraging disentanglement actually succeeds in increasing disentanglement. We also show that an alternative method designed for domain adaptation does not help, indicating the challenging nature of the SMS-TRIS benchmark.
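The abstract's core idea, that under a Sparse Mechanism Shift only a few causal mechanisms change between source and target, so a model whose parameters are disentangled per mechanism can adapt from few target samples, can be illustrated with a toy sketch. Everything below (the linear per-factor mechanisms, the `adapt_few_shot` helper, the shift applied to factor 0) is a hypothetical illustration of the principle, not the paper's CITRIS implementation:

```python
import numpy as np

# Toy sketch (hypothetical, not the authors' code): a next-step transition
# model whose parameters are partitioned per causal factor, so that a Sparse
# Mechanism Shift in the target domain only requires updating the parameters
# of the shifted mechanisms while all others stay frozen.

rng = np.random.default_rng(0)
n_factors, dim = 3, 4

# One independent linear mechanism per causal factor: z_i(t+1) = A_i z_i(t).
mechanisms = [rng.normal(size=(dim, dim)) for _ in range(n_factors)]

def mse(A, pairs):
    """Mean squared next-step prediction error of mechanism A on (z_t, z_{t+1}) pairs."""
    return float(np.mean([np.sum((A @ z - y) ** 2) for z, y in pairs]))

def adapt_few_shot(shifted, data, lr=0.05, steps=100):
    """Take gradient steps only on the shifted mechanisms; all others stay frozen."""
    for _ in range(steps):
        for i in shifted:
            for z, y in data[i]:
                err = mechanisms[i] @ z - y
                mechanisms[i] -= lr * np.outer(err, z)  # grad of 0.5 * ||err||^2

# Target domain: only factor 0's mechanism has shifted (sparse shift).
target_A0 = mechanisms[0] + 0.5
zs = [rng.normal(size=dim) for _ in range(20)]
data = {0: [(z, target_A0 @ z) for z in zs]}

frozen_before = [A.copy() for A in mechanisms[1:]]
err_before = mse(mechanisms[0], data[0])
adapt_few_shot(shifted=[0], data=data)
err_after = mse(mechanisms[0], data[0])
```

Because the shift is sparse, the few target samples are spent updating only the one mechanism that actually changed; the unshifted mechanisms retain their source-domain parameters, which is the transfer-learning advantage the paper evaluates on SMS-TRIS.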
Pages: 18
Related Papers
50 records in total
  • [31] Causal representation for few-shot text classification
    Yang, Maoqin
    Zhang, Xuejie
    Wang, Jin
    Zhou, Xiaobing
    APPLIED INTELLIGENCE, 2023, 53 (18) : 21422 - 21432
  • [33] StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation
    Alanov, Aibek
    Titov, Vadim
    Nakhodnov, Maksim
    Vetrov, Dmitry
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 2184 - 2194
  • [34] Few-Shot Link Prediction with Domain-Agnostic Graph Embedding
    Zhu, Hao
    Das, Mahashweta
    Bendre, Mangesh
    Wang, Fei
    Yang, Hao
    Hassoun, Soha
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 659 - 664
  • [35] PDA: Proxy-based domain adaptation for few-shot image recognition
    Liu, Ge
    Zhao, Linglan
    Fang, Xiangzhong
    IMAGE AND VISION COMPUTING, 2021, 110
  • [36] Few-shot time-series anomaly detection with unsupervised domain adaptation
    Li, Hongbo
    Zheng, Wenli
    Tang, Feilong
    Zhu, Yanmin
    Huang, Jielong
    INFORMATION SCIENCES, 2023, 649
  • [37] AsyFOD: An Asymmetric Adaptation Paradigm for Few-Shot Domain Adaptive Object Detection
    Gao, Yipeng
    Lin, Kun-Yu
    Yan, Junkai
    Wang, Yaowei
    Zheng, Wei-Shi
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 3261 - 3271
  • [38] HARDMIX: A REGULARIZATION METHOD TO MITIGATE THE LARGE SHIFT IN FEW-SHOT DOMAIN ADAPTATION
    Liang, Ziyun
    Gu, Yun
    Yang, Jie
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 454 - 458
  • [39] Prompt-induced prototype alignment for few-shot unsupervised domain adaptation
    Li, Yongguang
    Long, Sifan
    Wang, Shengsheng
    Zhao, Xin
    Li, Yiyang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 269
  • [40] Domain adversarial adaptation framework for few-shot QoT estimation in optical networks
    Cai, Zhuojun
    Wang, Qihang
    Deng, Yubin
    Zhang, Peng
    Zhou, Gai
    Li, Yang
    Khan, Faisal Nadeem
    JOURNAL OF OPTICAL COMMUNICATIONS AND NETWORKING, 2024, 16 (11) : 1133 - 1144