Adversarial Task Up-sampling for Meta-learning

Cited by: 0
Authors
Wu, Yichen [1 ,2 ]
Huang, Long-Kai [2 ]
Wei, Ying [1 ]
Affiliations
[1] City Univ Hong Kong, Hong Kong, Peoples R China
[2] Tencent AI Lab, Shenzhen, Peoples R China
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers the meta-testing tasks. Frequent violation of this assumption in applications with either insufficient tasks or a very narrow meta-training task distribution leads to memorization or learner overfitting. Recent solutions have pursued augmentation of meta-training tasks, yet it remains an open question how to generate tasks that are both correct and sufficiently imaginary. In this paper, we seek an approach that up-samples meta-training tasks from the task representation via a task up-sampling network. Moreover, the resulting approach, named Adversarial Task Up-sampling (ATU), suffices to generate tasks that can maximally contribute to the latest meta-learner by maximizing an adversarial loss. On few-shot sine regression and image classification datasets, we empirically validate the marked improvement of ATU over state-of-the-art task augmentation strategies in meta-testing performance, as well as in the quality of the up-sampled tasks.
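The adversarial criterion in the abstract, proposing tasks that maximize the current meta-learner's loss, can be illustrated with a toy sine-regression sketch. This is a minimal illustration only, not the paper's ATU implementation: the task parameterization (amplitude, phase), the fixed-feature learner, and the finite-difference gradient ascent are all assumptions chosen for brevity.

```python
import numpy as np

def task_loss(model_w, amp, phase, n=20):
    """MSE of a fixed-feature linear learner on the sine task y = amp*sin(x+phase)."""
    x = np.linspace(-np.pi, np.pi, n)
    feats = np.stack([np.sin(x), np.cos(x)], axis=1)  # toy stand-in for a meta-learner
    y = amp * np.sin(x + phase)
    pred = feats @ model_w
    return np.mean((pred - y) ** 2)

def upsample_task(model_w, amp, phase, steps=10, lr=0.1, eps=1e-4):
    """Perturb the task parameters by gradient *ascent* on the learner's loss:
    the adversarial criterion proposes tasks the current learner handles worst."""
    for _ in range(steps):
        # finite-difference gradients w.r.t. the task parameters
        g_amp = (task_loss(model_w, amp + eps, phase) -
                 task_loss(model_w, amp - eps, phase)) / (2 * eps)
        g_phase = (task_loss(model_w, amp, phase + eps) -
                   task_loss(model_w, amp, phase - eps)) / (2 * eps)
        amp += lr * g_amp      # ascent, not descent: maximize the adversarial loss
        phase += lr * g_phase
    return amp, phase

w = np.array([1.0, 0.0])       # learner currently fits y = sin(x) well
amp0, phase0 = 1.2, 0.3        # an existing meta-training task
amp1, phase1 = upsample_task(w, amp0, phase0)
# The up-sampled task is strictly harder for the current learner.
assert task_loss(w, amp1, phase1) > task_loss(w, amp0, phase0)
```

In ATU proper, the up-sampler is a learned network operating on task representations rather than on two scalar parameters, but the objective has the same shape: move the generated task in the direction that increases the latest meta-learner's loss.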
Pages: 14
Related papers
50 records in total
  • [1] Towards well-generalizing meta-learning via adversarial task augmentation
    Wang, Haoqing
    Mai, Huiyu
    Gong, Yuhang
    Deng, Zhi-Hong
    [J]. ARTIFICIAL INTELLIGENCE, 2023, 317
  • [2] Image Up-Sampling for Super Resolution with Generative Adversarial Network
    Tsunekawa, Shohei
    Inoue, Katsufumi
    Yoshioka, Michifumi
    [J]. AI 2018: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, 11320 : 258 - 270
  • [3] Meta-Learning Adversarial Bandit Algorithms
    Khodak, Mikhail
    Osadchiy, Ilya
    Harris, Keegan
    Balcan, Maria-Florina
    Levy, Kfir Y.
    Meir, Ron
    Wu, Zhiwei Steven
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] Towards Task Sampler Learning for Meta-Learning
    Wang, Jingyao
    Qiang, Wenwen
    Su, Xingzhe
    Zheng, Changwen
    Sun, Fuchun
    Xiong, Hui
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024,
  • [5] Leveraging Task Variability in Meta-learning
    Aimen A.
    Ladrecha B.
    Sidheekh S.
    Krishnan N.C.
    [J]. SN Computer Science, 4 (5)
  • [6] Meta-learning with an Adaptive Task Scheduler
    Yao, Huaxiu
    Wang, Yu
    Wei, Ying
    Zhao, Peilin
    Mahdavi, Mehrdad
    Lian, Defu
    Finn, Chelsea
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Enhancing Fault Diagnosis in Industrial Processes through Adversarial Task Augmented Sequential Meta-Learning
    Sun, Dexin
    Fan, Yunsheng
    Wang, Guofeng
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (11):
  • [8] Improving progressive sampling via meta-learning
    Leite, R
    Brazdil, P
    [J]. PROGRESS IN ARTIFICIAL INTELLIGENCE-B, 2003, 2902 : 313 - 323
  • [9] TASK2VEC: Task Embedding for Meta-Learning
    Achille, Alessandro
    Lam, Michael
    Tewari, Rahul
    Ravichandran, Avinash
    Maji, Subhransu
    Fowlkes, Charless
    Soatto, Stefano
    Perona, Pietro
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6439 - 6448
  • [10] Learning transferable targeted universal adversarial perturbations by sequential meta-learning
    Weng, Juanjuan
    Luo, Zhiming
    Lin, Dazhen
    Li, Shaozi
    [J]. COMPUTERS & SECURITY, 2024, 137