Learning Instance and Task-Aware Dynamic Kernels for Few-Shot Learning

Cited by: 3
Authors
Ma, Rongkai [1 ]
Fang, Pengfei [1 ,2 ,3 ]
Avraham, Gil [4 ]
Zuo, Yan [3 ]
Zhu, Tianyu [1 ]
Drummond, Tom [5 ]
Harandi, Mehrtash [1 ,3 ]
Affiliations
[1] Monash Univ, Melbourne, Vic, Australia
[2] Australian Natl Univ, Canberra, ACT, Australia
[3] CSIRO, Canberra, ACT, Australia
[4] Amazon Australia, Melbourne, Vic, Australia
[5] Univ Melbourne, Melbourne, Vic, Australia
DOI
10.1007/978-3-031-20044-1_15
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning and generalizing to novel concepts with few samples (Few-Shot Learning) remains an essential challenge for real-world applications. A principled way of achieving few-shot learning is to realize a model that can rapidly adapt to the context of a given task. Dynamic networks have been shown to be capable of learning content-adaptive parameters efficiently, making them suitable for few-shot learning. In this paper, we propose to learn the dynamic kernels of a convolution network as a function of the task at hand, enabling faster generalization. To this end, we compute our dynamic kernels as a function of both the entire task and each individual sample, and further develop a mechanism that conditions on each channel and position independently. This results in dynamic kernels that simultaneously attend to global task information whilst also attending to the minuscule details available. We empirically show that our model improves performance on few-shot classification and detection tasks, achieving a tangible improvement over several baseline models. This includes state-of-the-art results on four few-shot classification benchmarks: mini-ImageNet, tiered-ImageNet, CUB, and FC100, and competitive results on a few-shot detection dataset: MS COCO-PASCAL-VOC.
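The kernel-generation idea summarized in the abstract (kernels conditioned jointly on a task-level descriptor and an instance-level descriptor, resolved per channel) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the pooling, the linear maps `W_task`/`W_inst`, and the `tanh` combination are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_pool(x):
    # x: (C, H, W) feature map -> (C,) channel descriptor
    return x.mean(axis=(1, 2))

def make_dynamic_kernel(task_feats, instance_feat, W_task, W_inst, k=3):
    """Generate one k x k kernel per channel, conditioned on a
    task-level descriptor (mean over support samples) and an
    instance-level descriptor (the query sample)."""
    task_desc = np.stack([global_pool(f) for f in task_feats]).mean(axis=0)  # (C,)
    inst_desc = global_pool(instance_feat)                                   # (C,)
    # Per-channel kernel weights: broadcast (C, 1) against (C, k*k)
    kernel = np.tanh(task_desc[:, None] * W_task + inst_desc[:, None] * W_inst)
    return kernel.reshape(-1, k, k)                                          # (C, k, k)

C, H, W, k = 4, 8, 8, 3
support = [rng.standard_normal((C, H, W)) for _ in range(5)]  # task context
query = rng.standard_normal((C, H, W))                        # individual sample
W_task = rng.standard_normal((C, k * k)) * 0.1
W_inst = rng.standard_normal((C, k * k)) * 0.1
kern = make_dynamic_kernel(support, query, W_task, W_inst, k)
print(kern.shape)  # (4, 3, 3)
```

Because the instance descriptor enters the generator, two different query samples under the same task yield different kernels, which is the content-adaptive behaviour the paper exploits.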
Pages: 257 - 274
Page count: 18
Related Papers
50 records in total
  • [11] TASK-AWARE FEW-SHOT VISUAL CLASSIFICATION WITH IMPROVED SELF-SUPERVISED METRIC LEARNING
    Cheng, Chia-Sheng
    Shao, Hao-Chiang
    Lin, Chia-Wen
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3531 - 3535
  • [12] Task-Aware Feature Composition for Few-Shot Relation Classification
    Deng, Sinuo
    Shi, Ge
    Feng, Chong
    Wang, Yashen
    Liao, Lejian
    APPLIED SCIENCES-BASEL, 2022, 12 (07):
  • [13] TAAN: Task-Aware Attention Network for Few-shot Classification
    Wang, Zhe
    Liu, Li
    Li, FanZhang
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 9130 - 9136
  • [14] Few-shot Image Classification Based on Task-Aware Relation Network
    Guo L.
    Wang G.
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (03): : 977 - 985
  • [15] TAGM: Task-Aware Graph Model for Few-shot Node Classification
    Zhao, Feng
    Zhang, Min
    Huang, Tiancheng
    Wang, Donglin
    PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023, 2023, : 462 - 471
  • [16] LTAF-NET: LEARNING TASK-AWARE ADAPTIVE FEATURES AND REFINING MASK FOR FEW-SHOT SEMANTIC SEGMENTATION
    Mao, Binjie
    Wang, Lingfeng
    Xiang, Shiming
    Pan, Chunhong
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2320 - 2324
  • [17] Dynamic concept-aware network for few-shot learning
    Zhou, Jun
    Lv, Qiujie
    Chen, Calvin Yu-Chian
    KNOWLEDGE-BASED SYSTEMS, 2022, 258
  • [18] Task-Aware Dual-Representation Network for Few-Shot Action Recognition
    Wang, Xiao
    Ye, Weirong
    Qi, Zhongang
    Wang, Guangge
    Wu, Jianping
    Shan, Ying
    Qie, Xiaohu
    Wang, Hanzi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (10) : 5932 - 5946
  • [19] Ta-Adapter: Enhancing few-shot CLIP with task-aware encoders
    Zhang, Wenbo
    Zhang, Yifan
    Deng, Yuyang
    Zhang, Wenlong
    Lin, Jianfeng
    Huang, Binqiang
    Zhang, Jinlu
    Yu, Wenhao
    PATTERN RECOGNITION, 2024, 153
  • [20] Task Agnostic Meta-Learning for Few-Shot Learning
    Jamal, Muhammad Abdullah
    Qi, Guo-Jun
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11711 - 11719