Multimodal meta-learning through meta-learned task representations

Cited by: 0
Authors
Vettoruzzo, Anna [1 ]
Bouguelia, Mohamed-Rafik [1 ]
Rognvaldsson, Thorsteinn [1 ]
Affiliations
[1] Halmstad Univ, Ctr Appl Intelligent Syst Res (CAISR), Halmstad, Sweden
Source
NEURAL COMPUTING & APPLICATIONS | 2024, Vol. 36, Issue 15
Keywords
Meta-learning; Few-shot learning; Transfer learning; Task representation; Multimodal distribution
DOI
10.1007/s00521-024-09540-1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Few-shot meta-learning involves training a model on multiple tasks to enable it to efficiently adapt to new, previously unseen tasks with only a limited number of samples. However, current meta-learning methods assume that all tasks are closely related and belong to a common domain, whereas in practice, tasks can be highly diverse and originate from multiple domains, resulting in a multimodal task distribution. This poses a challenge for existing methods, as they struggle to learn a shared representation that can be easily adapted to all tasks within the distribution. To address this challenge, we propose a meta-learning framework that can handle multimodal task distributions by conditioning the model on the current task, enabling faster adaptation. Our proposed method learns to encode each task and generate task embeddings that modulate the model's activations. The resulting modulated model becomes specialized for the current task, leading to more effective adaptation. Our framework is designed to work in a realistic setting where the mode from which a task is sampled is unknown. Nonetheless, we also explore the possibility of incorporating auxiliary information, such as the task-mode label, to further enhance the performance of our method when such information is available. We evaluate our proposed framework on various few-shot regression and image classification tasks, demonstrating its superiority over other state-of-the-art meta-learning methods. The results highlight the benefits of learning to embed task-specific information in the model to guide the adaptation when tasks are sampled from a multimodal distribution.
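The abstract describes generating task embeddings that modulate the model's activations. A minimal sketch of this general idea (FiLM-style feature modulation) is given below; all names, dimensions, and the deep-set-style encoder are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical task encoder: project support (x, y) pairs and average,
# giving a permutation-invariant task embedding.
W_enc = rng.standard_normal((2, 8))

def encode_task(support_xy):
    return relu(support_xy @ W_enc).mean(axis=0)  # shape (8,)

# Base network whose hidden activations are modulated by FiLM-style
# parameters (gamma, beta) generated from the task embedding.
W1 = rng.standard_normal((1, 16))
W2 = rng.standard_normal((16, 1))
W_film = rng.standard_normal((8, 32))  # maps embedding -> gamma (16) and beta (16)

def modulated_forward(x, task_emb):
    gamma, beta = np.split(task_emb @ W_film, 2)
    h = relu(gamma * (x @ W1) + beta)  # task-conditioned activations
    return h @ W2

support = rng.standard_normal((5, 2))   # 5 shots of (x, y) for one regression task
query_x = rng.standard_normal((10, 1))  # query inputs
pred = modulated_forward(query_x, encode_task(support))
print(pred.shape)  # (10, 1)
```

Conditioning through multiplicative and additive modulation, rather than through fine-tuning alone, lets a single shared network specialize to whichever mode of the task distribution the support set came from.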
Pages: 8519-8529
Page count: 11