RankDNN: Learning to Rank for Few-Shot Learning

Cited by: 0
Authors
Guo, Qianyu [1 ,2 ]
Gong, Haotong [1]
Wei, Xujun [1 ,3 ]
Fu, Yanwei [2 ]
Yu, Yizhou [4 ]
Zhang, Wenqiang [2 ,3 ]
Ge, Weifeng [1 ,2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Nebula AI Grp, Shanghai, Peoples R China
[2] Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[4] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
KRONECKER PRODUCT;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. In comparison to image classification, ranking relation classification is sample efficient and domain agnostic. Moreover, it provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector Kronecker products, and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractor, and the resulting deep neural network is called the ranking deep neural network, or RankDNN. RankDNN can also be flexibly fused with other post-processing methods. During meta-testing, RankDNN ranks support images according to their similarity with the query samples, and each query sample is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN effectively improves the performance of baselines built on a variety of backbones and outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
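To make the pipeline described in the abstract concrete, the sketch below shows one reading of the triplet encoding and the meta-test ranking step in PyTorch. The class name, layer sizes, the pre-projected feature dimension, and the pairwise-voting loop are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Minimal sketch of the RankDNN ranking-relation idea, assuming PyTorch.
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class RankMLP(nn.Module):
    """Binary classifier over the difference of two vector Kronecker products."""
    def __init__(self, feat_dim: int = 64, hidden_dim: int = 1024):
        super().__init__()
        # Input size is feat_dim**2 because kron(a, b) of two feat_dim
        # vectors has feat_dim**2 entries; features are assumed to be
        # pre-projected to a small feat_dim to keep this tractable.
        self.net = nn.Sequential(
            nn.Linear(feat_dim * feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, q, s1, s2):
        # Encode the triplet (q, s1, s2) as kron(q, s1) - kron(q, s2).
        # A batched outer product flattened row-major equals the
        # Kronecker product of the two vectors.
        k1 = torch.einsum('bi,bj->bij', q, s1).flatten(1)
        k2 = torch.einsum('bi,bj->bij', q, s2).flatten(1)
        # Positive logit: s1 is ranked as more relevant to q than s2.
        return self.net(k1 - k2)

@torch.no_grad()
def classify_query(model, q_feat, support_feats, support_labels):
    """Assign the query the label of the support image that wins the most
    pairwise ranking comparisons (a simple voting reading of the
    nearest-neighbor step in the abstract)."""
    n = support_feats.size(0)
    wins = torch.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                logit = model(q_feat.unsqueeze(0),
                              support_feats[i].unsqueeze(0),
                              support_feats[j].unsqueeze(0))
                wins[i] += (logit.item() > 0)
    return support_labels[wins.argmax()]
```

Because the MLP only sees ranking relations between pairs of Kronecker-product features rather than class logits, the same trained ranker can in principle be reused across label spaces, which is consistent with the domain-agnostic claim in the abstract.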
Pages: 728-736
Number of pages: 9