Improving meta-learning model via meta-contrastive loss

Cited by: 3
Authors
Tian, Pinzhuo [1]
Gao, Yang [1]
Affiliations
[1] Nanjing Univ, Dept Comp Sci & Technol, Nanjing 210023, Jiangsu, Peoples R China
Keywords
meta-learning; few-shot learning; meta-regularization; deep learning
DOI
10.1007/s11704-021-1188-9
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recently, addressing few-shot learning with the meta-learning framework has achieved great success. Regularization is a powerful and widely used technique for improving machine learning algorithms, yet little research has focused on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as a regularization to fill this gap. Our method is motivated by the observation that the limited data in few-shot learning is only a small sample drawn from the whole data distribution, so different samples can yield biased representations of that distribution. As a result, the models trained on the few training data (support set) and on the test data (query set) may be misaligned in the model space, and a model learned on the support set cannot generalize well to the query data. The proposed meta-contrastive loss is designed to align the models of the support and query sets to overcome this problem, thereby improving the performance of the meta-learning model in few-shot learning. Extensive experiments demonstrate that our method improves the performance of different gradient-based meta-learning models on various learning problems, e.g., few-shot regression and classification.
Pages: 7
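
The abstract describes the mechanism only at a high level. As a rough illustration, the sketch below shows one way a model-alignment regularizer of this flavor could be attached to MAML-style meta-training in PyTorch: the model is adapted separately on the support set and on the query set, and a penalty on the distance between the two adapted parameter sets is added to the usual outer objective. Everything here (the single inner step, the squared-distance penalty, the names `inner_step`, `inner_lr`, `reg_weight`) is an illustrative assumption, not the paper's actual meta-contrastive loss, which may use a genuinely contrastive formulation (e.g., with negative pairs across tasks).

```python
# Minimal sketch (not the authors' code): MAML-style meta-objective plus a
# hypothetical alignment penalty between support- and query-adapted models.
import torch
import torch.nn.functional as F
from torch.func import functional_call


def inner_step(model, params, x, y, inner_lr):
    """One inner-loop gradient step on (x, y), starting from `params`.

    `params` is a dict of name -> tensor; create_graph=True keeps the
    step differentiable for the second-order outer update.
    """
    preds = functional_call(model, params, (x,))
    loss = F.cross_entropy(preds, y)
    grads = torch.autograd.grad(loss, tuple(params.values()), create_graph=True)
    return {name: p - inner_lr * g
            for (name, p), g in zip(params.items(), grads)}


def meta_loss(model, support, query, inner_lr=0.01, reg_weight=0.1):
    """Outer-loop loss for one task: MAML query loss + alignment penalty."""
    xs, ys = support
    xq, yq = query
    params = dict(model.named_parameters())

    # Adapt on the support set (the usual MAML inner loop) ...
    support_params = inner_step(model, params, xs, ys, inner_lr)
    # ... and, for the regularizer only, adapt on the query set as well.
    query_params = inner_step(model, params, xq, yq, inner_lr)

    # Standard outer objective: query loss of the support-adapted model.
    task_loss = F.cross_entropy(
        functional_call(model, support_params, (xq,)), yq)

    # Alignment penalty: pull the two adapted models together in model space.
    align = sum(((ps - pq) ** 2).sum()
                for ps, pq in zip(support_params.values(),
                                  query_params.values()))
    return task_loss + reg_weight * align
```

The point the abstract argues for sits in the last two statements: the outer objective is the usual query loss of the support-adapted model, plus a term that penalizes misalignment between the support-trained and query-trained models, so that a model learned from the support set also fits the query distribution.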