RankDNN: Learning to Rank for Few-Shot Learning

Cited by: 0
Authors
Guo, Qianyu [1 ,2 ]
Gong Haotong [1 ]
Wei, Xujun [1 ,3 ]
Fu, Yanwei [2 ]
Yu, Yizhou [4 ]
Zhang, Wenqiang [2 ,3 ]
Ge, Weifeng [1 ,2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Nebula AI Grp, Shanghai, Peoples R China
[2] Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[4] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
KRONECKER PRODUCT;
DOI
Not available
CLC Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. In comparison to image classification, ranking relation classification is sample efficient and domain agnostic. Moreover, it provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector Kronecker products, and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractor, and the resulting deep neural network is called the ranking deep neural network, or RankDNN. RankDNN can also be flexibly fused with other post-processing methods. During meta-testing, RankDNN ranks support images according to their similarity with the query samples, and each query sample is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN effectively improves the performance of its baselines with a variety of backbones, and it outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
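The abstract's core idea admits a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: the feature dimension, the hidden width, and the sigmoid head are assumptions made for illustration; only the encoding of a triplet as the difference of two vector Kronecker products followed by an MLP ranking classifier comes from the abstract.

```python
import torch
import torch.nn as nn

d = 64  # assumed feature dimension of a frozen backbone's output

class RankMLP(nn.Module):
    """Binary classifier over the ranking relation of an image triplet."""
    def __init__(self, dim, hidden=512):
        super().__init__()
        # The MLP operates on the dim*dim-dimensional Kronecker-product space.
        self.net = nn.Sequential(
            nn.Linear(dim * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, s1, s2):
        # Vector Kronecker product of each (query, support) feature pair,
        # computed as a batched outer product and flattened.
        k1 = torch.einsum('bi,bj->bij', q, s1).flatten(1)
        k2 = torch.einsum('bi,bj->bij', q, s2).flatten(1)
        # The triplet is encoded as the difference of the two products; the
        # output is the probability that s1 is more relevant to q than s2.
        return torch.sigmoid(self.net(k1 - k2)).squeeze(-1)

# Usage with dummy backbone features.
q, s1, s2 = (torch.randn(8, d) for _ in range(3))
ranker = RankMLP(d)
p = ranker(q, s1, s2)  # shape (8,), values in [0, 1]
```

At meta-test time, such pairwise ranking decisions would be aggregated to order the support images for each query, and the query would take the class label of its top-ranked (nearest) support image, as the abstract describes.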
Pages: 728-736
Number of pages: 9