Comparative Evaluation of Hand-Crafted and Learned Local Features

Cited: 183
Authors
Schonberger, Johannes L. [1]
Hardmeier, Hans [1]
Sattler, Torsten [1]
Pollefeys, Marc [1,2]
Affiliations
[1] Swiss Fed Inst Technol, Dept Comp Sci, Zurich, Switzerland
[2] Microsoft Corp, Redmond, WA 98052 USA
Funding
European Union's Horizon 2020;
Keywords
DOI
10.1109/CVPR.2017.736
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptor's quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.
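The descriptor matching the abstract refers to is typically done by nearest-neighbor search in descriptor space, filtered with Lowe's ratio test (accept a match only when the best neighbor is clearly closer than the second best). The sketch below is a minimal NumPy illustration of that standard criterion, not the paper's actual evaluation pipeline; the function name and the 0.8 ratio threshold are illustrative assumptions.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test (illustrative sketch).

    desc_a, desc_b: (n, d) arrays of local descriptors (e.g. 128-D SIFT).
    Returns a list of (index_in_a, index_in_b) accepted matches.
    """
    # Pairwise squared Euclidean distances, shape (n_a, n_b).
    d2 = (
        np.sum(desc_a**2, axis=1)[:, None]
        + np.sum(desc_b**2, axis=1)[None, :]
        - 2.0 * desc_a @ desc_b.T
    )
    matches = []
    for i, row in enumerate(d2):
        nn1, nn2 = np.argsort(row)[:2]  # two closest descriptors in b
        # Ratio test on squared distances: nn1 must beat nn2 by a margin.
        if row[nn1] < (ratio**2) * row[nn2]:
            matches.append((i, int(nn1)))
    return matches

# Toy usage: three query descriptors, four candidates; each query has one
# exact counterpart and clearly worse alternatives, so all three survive.
desc_a = np.eye(4)[:3]
desc_b = np.eye(4)
print(match_descriptors(desc_a, desc_b))  # [(0, 0), (1, 1), (2, 2)]
```

Because the ratio test discards ambiguous matches rather than all bad ones, the abstract's point follows naturally: counting surviving matches alone says little about downstream tasks such as image registration or 3D reconstruction.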
Pages: 6959–6968
Page count: 10
Related Papers
50 items in total
  • [21] Classification of radiolarian images with hand-crafted and deep features
    Keceli, Ali Seydi
    Kaya, Aydon
    Keceli, Seda Uzuncimen
    COMPUTERS & GEOSCIENCES, 2017, 109 : 67 - 74
  • [22] Fusion of Hand-crafted and Deep Features for Empathy Prediction
    Hinduja, Saurabh
    Uddin, Md Taufeeq
    Jannat, Sk Rahatul
    Sharma, Astha
    Canavan, Shaun
    2019 14TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2019), 2019, : 709 - 712
  • [23] DEEP SALIENCY MAP ESTIMATION OF HAND-CRAFTED FEATURES
    Jin, Guoqing
    Shen, Shiwei
    Zhang, Dongming
    Duan, Wenjing
    Zhang, Yongdong
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 4262 - 4266
  • [24] Agent's activity recognition: a focus on comparison of automatically-learned and hand-crafted features
    Wang, Yan
    Chen, Yuguo
    Yu, Hongnian
    2019 INTERNATIONAL CONFERENCE ON ADVANCED MECHATRONIC SYSTEMS (ICAMECHS), 2019, : 241 - 244
  • [25] Hand-crafted versus learned representations for audio event detection
    Selver Ezgi Küçükbay
    Adnan Yazıcı
    Sinan Kalkan
    Multimedia Tools and Applications, 2022, 81 : 30911 - 30930
  • [26] Combining CNN with Hand-Crafted Features for Image Classification
    Zhou Tianyu
    Miao Zhenjiang
    Zhang Jianhu
    PROCEEDINGS OF 2018 14TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP), 2018, : 554 - 557
  • [27] Towards adept hand-crafted features for ocular biometrics
    Vyas, Ritesh
    2020 8TH INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF 2020), 2020,
  • [28] Fusing Deep Learned and Hand-Crafted Features of Appearance, Shape, and Dynamics for Automatic Pain Estimation
    Egede, Joy
    Valstar, Michel
    Martinez, Brais
    2017 12TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2017), 2017, : 689 - 696
  • [29] Feature Fusion: H-ELM based Learned Features and Hand-Crafted Features for Human Activity Recognition
    AlDahoul, Nouar
    Akmeliawati, Rini
    Htike, Zaw Zaw
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2019, 10 (07) : 509 - 514
  • [30] Editorial Note: Role of Hand-crafted and Learned Representations for Multimedia Applications
    Multimedia Tools and Applications, 2018, 77 : 4827 - 4827