Learning a metric for class-conditional KNN

Cited by: 0
Authors
Im, Daniel Jiwoong [1 ]
Taylor, Graham W. [2 ]
Affiliations
[1] HHMI, Janelia Res Campus, Ashburn, VA 20147 USA
[2] Univ Guelph, Sch Engn, Guelph, ON, Canada
Keywords: none listed
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Naive Bayes Nearest Neighbour (NBNN) is a simple and effective framework which addresses many of the pitfalls of K-Nearest Neighbour (KNN) classification, and it has yielded competitive results on several computer vision benchmarks. Its central tenet is that a query should not be compared against every example in the database with class information ignored; instead, NN searches are performed within each class, producing one score per class. A key problem with NN techniques, including NBNN, is that they fail when the data representation does not capture perceptual (e.g. class-based) similarity. NBNN circumvents this by using independently engineered descriptors (e.g. SIFT). To extend its applicability beyond image-based domains, we propose to learn a metric which captures perceptual similarity. Similar to how Neighbourhood Components Analysis optimizes a differentiable form of KNN classification, we propose "Class Conditional" metric learning (CCML), which optimizes a soft form of the NBNN selection rule. Typical metric learning algorithms learn either a global metric or local metrics, but our proposed method can be tuned to a particular level of locality through a single parameter. An empirical evaluation on classification and retrieval tasks demonstrates that our method clearly outperforms existing learned distance metrics across a variety of image and non-image datasets.
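The class-conditional scoring described in the abstract lends itself to a short illustration. Below is a minimal NumPy sketch of per-class nearest-neighbour scoring with a soft-min relaxation, in the spirit of the soft NBNN selection rule mentioned above. It is not the authors' implementation: the names (soft_nbnn_scores, beta, L) and the log-sum-exp soft-min are illustrative assumptions. The inverse temperature beta plays the role of the single locality parameter: large beta approaches the hard per-class 1-NN distance, while small beta tends toward the mean distance to the whole class.

    import numpy as np

    def soft_nbnn_scores(query, X, y, L=None, beta=1.0):
        # query: (d,) vector; X: (n, d) database; y: (n,) integer labels.
        # L: optional (d, d) learned linear transform (Mahalanobis metric L^T L);
        #    hypothetical stand-in for a learned metric, as in the paper's setting.
        # beta: inverse temperature; beta -> inf recovers the hard per-class
        #       1-NN distance, beta -> 0 tends to the mean distance to the class.
        if L is not None:
            query, X = query @ L.T, X @ L.T
        d2 = np.sum((X - query) ** 2, axis=1)  # squared distances to all examples
        scores = {}
        for c in np.unique(y):
            dc = d2[y == c]
            m = dc.min()  # shift for a numerically stable log-sum-exp soft-min
            scores[c] = m - np.log(np.mean(np.exp(-beta * (dc - m)))) / beta
        return scores  # distance-like: smaller means the class is closer

    # Toy usage on random data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))
    y = rng.integers(0, 3, size=60)
    s = soft_nbnn_scores(rng.normal(size=5), X, y, beta=5.0)
    pred = min(s, key=s.get)  # choose the class with the smallest score

Predicting the arg-min class mirrors the NBNN decision of choosing the class whose members lie nearest to the query in the (possibly learned) metric space.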
Pages: 1932-1939 (8 pages)
Related papers (50 in total; first 10 shown)
  • [1] Huang, Junchu; Chen, Weijie; Yang, Shicai; Xie, Di; Pu, Shiliang; Zhuang, Yueting. Transductive CLIP with Class-Conditional Contrastive Learning. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 3858-3862.
  • [2] Xie, Ming-Kun; Huang, Sheng-Jun. Learning Class-Conditional GANs with Active Sampling. KDD '19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019: 998-1006.
  • [3] Fox, Ian; Wiens, Jenna. Advocacy Learning: Learning through Competition and Class-Conditional Representations. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019: 2315-2321.
  • [4] Jarecki, Jana B.; Meder, Bjoern; Nelson, Jonathan D. Naive and Robust: Class-Conditional Independence in Human Classification Learning. Cognitive Science, 2018, 42(1): 4-42.
  • [5] Nagarajan, Bhalaji; Marques, Ricardo; Mejia, Marcos; Radeva, Petia. Class-Conditional Importance Weighting for Deep Learning with Noisy Labels. Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), Vol. 5, 2022: 679-686.
  • [6] Kim, Yunji; Nam, Seonghyeon; Cho, In; Kim, Seon Joo. Unsupervised Keypoint Learning for Guiding Class-Conditional Video Prediction. Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019.
  • [7] Yao, Jiangchao; Han, Bo; Zhou, Zhihan; Zhang, Ya; Tsang, Ivor W. Latent Class-Conditional Noise Model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 9964-9980.
  • [8] Lee, Woojin; Kim, Hoki; Lee, Jaewook. Compact Class-Conditional Domain Invariant Learning for Multi-Class Domain Adaptation. Pattern Recognition, 2021, 112.
  • [9] Wang, Yue; Li, Yuke; Elder, James H.; Wu, Runmin; Lu, Huchuan. Class-Conditional Domain Adaptation for Semantic Segmentation. Computational Visual Media, 2024, 10(3): 425-438.
  • [10] Bunse, Mirko; Pfahler, Lukas. Class-Conditional Label Noise in Astroparticle Physics. Machine Learning and Knowledge Discovery in Databases: Applied Data Science and Demo Track (ECML PKDD 2023), Part VI, 2023, 14174: 19-35.