Fast and hierarchical K-d tree based stereo image matching method

Cited by: 0
Authors
Zhang G.-A. [1 ]
Yuan Z.-Y. [1 ]
Tong Q.-Q. [1 ]
Liao X.-Y. [1 ]
Affiliations
[1] School of Computer, Wuhan University, Wuhan
Source
Corresponding author: Yuan, Zhi-Yong (zhiyongyuan@whu.edu.cn) | 2016 / Chinese Academy of Sciences / Vol. 27
Funding
National Natural Science Foundation of China
Keywords
ACIO (approximately consistent in orientation); HKD-tree; SIFT (scale-invariant feature transform); SPI (stereo pairwise image)
DOI
10.13328/j.cnki.jos.005090
Abstract
Feature matching has long been a foundation and a central topic in computer vision and image processing. SIFT (scale-invariant feature transform, by David G. Lowe), owing to its invariance to image scale and rotation and its robustness to a substantial range of affine distortion and viewpoint change, has attracted the attention of researchers at home and abroad for over a decade. Speed and accuracy are crucial for stereo pairwise image matching in applications such as 3D reconstruction. First, to accelerate matching and improve its accuracy, this paper proposes a novel SIFT-based constraint called approximately consistent in orientation (ACIO), which captures the spatial relationship between matched feature vectors of a stereo pairwise image (SPI) and thereby rejects wrong correspondences efficiently. Second, the paper analyzes the structure of the standard K-d tree (SKD-tree) and proposes a hierarchical variant, the HKD-tree, which partitions the feature sets of an SPI into stripes according to the ACIO constraint and builds a mapping between them; by shrinking the search space, it greatly increases matching speed. Third, the paper presents an efficient and fast matching algorithm based on ACIO and the HKD-tree. Extensive experiments on a benchmark data set show that the proposed approach outperforms state-of-the-art methods in matching speed with a slight gain in accuracy. In particular, it is one order of magnitude faster than the SKD-tree and several times faster than the recent CasHash method. © Copyright 2016, Institute of Software, the Chinese Academy of Sciences. All rights reserved.
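The abstract describes the method only at a high level, and the paper's own implementation is not reproduced in this record. The minimal Python sketch below illustrates the general idea as stated in the abstract: partition the second image's SIFT features into orientation stripes, build one k-d tree per stripe, and restrict each query to its own stripe and its neighbors before applying Lowe's ratio test. All names (stripe_index, build_hkd, match), the stripe count, and the assumption that ACIO compares keypoints' dominant orientations are illustrative, not taken from the paper.

```python
# Hypothetical sketch of stripe-partitioned k-d tree matching under an
# orientation-consistency constraint, loosely following the abstract.
# Parameter choices (8 stripes, ratio 0.8) are illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def stripe_index(orientations, n_stripes=8):
    """Map each keypoint's dominant orientation (radians) to a stripe id."""
    frac = (orientations % (2 * np.pi)) / (2 * np.pi)
    return (frac * n_stripes).astype(int) % n_stripes

def build_hkd(desc_b, ori_b, n_stripes=8):
    """Build one k-d tree per orientation stripe of the second image.
    desc_b: (n, 128) SIFT descriptors; ori_b: (n,) dominant orientations."""
    stripes = stripe_index(ori_b, n_stripes)
    trees = {}
    for s in range(n_stripes):
        idx = np.flatnonzero(stripes == s)
        if idx.size:
            trees[s] = (cKDTree(desc_b[idx]), idx)
    return trees

def match(desc_a, ori_a, trees, n_stripes=8, ratio=0.8):
    """For each feature of image A, search only its own stripe and the two
    neighboring stripes (the orientation-consistency assumption), then keep
    matches that pass Lowe's ratio test against the global second-nearest."""
    matches = []
    stripes_a = stripe_index(ori_a, n_stripes)
    for i, (d, s) in enumerate(zip(desc_a, stripes_a)):
        best_dist, best_j, second = np.inf, -1, np.inf
        for s2 in ((s - 1) % n_stripes, s, (s + 1) % n_stripes):
            if s2 not in trees:
                continue
            tree, idx = trees[s2]
            k = min(2, idx.size)
            dist, j = tree.query(d, k=k)
            dist, j = np.atleast_1d(dist), np.atleast_1d(j)
            if dist[0] < best_dist:
                # old best becomes a candidate for second-nearest
                second = min(best_dist, dist[1] if k > 1 else np.inf)
                best_dist, best_j = dist[0], idx[j[0]]
            elif dist[0] < second:
                second = dist[0]
        if best_j >= 0 and best_dist < ratio * second:
            matches.append((i, best_j))
    return matches
```

Restricting each query to a few stripes is what reduces the search space; in the paper this, together with the hierarchical tree built once per stereo pair, accounts for the order-of-magnitude speed-up over the SKD-tree that the abstract reports.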
Pages: 2462-2472
Page count: 10
References
41 in total
  • [1] Cheng J., Leng C., Wu J.X., Cui H.N., Lu H.Q., Fast and accurate image matching with cascade hashing for 3D reconstruction, Proc. of the 2014 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1-8, (2014)
  • [2] Dalal N., Triggs B., Histograms of oriented gradients for human detection, Proc. of the 2005 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 886-893, (2005)
  • [3] Lowe D.G., Object recognition from local scale-invariant features, Proc. of the 7th IEEE Int'l Conf. on Computer Vision, 2, pp. 1150-1157, (1999)
  • [4] Lowe D.G., Distinctive image features from scale-invariant keypoints, Int'l Journal of Computer Vision, 60, 2, pp. 91-110, (2004)
  • [5] Snavely N., Seitz S.M., Szeliski R., Photo tourism: Exploring photo collections in 3D, ACM Trans. on Graphics (TOG), 25, 3, pp. 835-846, (2006)
  • [6] Philbin J., Chum O., Isard M., Sivic J., Zisserman A., Object retrieval with large vocabularies and fast spatial matching, Proc. of the 2007 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1-8, (2007)
  • [7] Mikolajczyk K., Leibe B., Schiele B., Multiple object class detection with a generative model, Proc. of the 2006 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 1, pp. 26-36, (2006)
  • [8] Ferrari V., Tuytelaars T., Van Gool L., Simultaneous object recognition and segmentation by image exploration, Proc. of the Computer Vision-ECCV 2004, pp. 40-54, (2004)
  • [9] Arth C., Leistner C., Bischof H., Robust local features and their application in self-calibration and object recognition on embedded systems, Proc. of the 2007 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1-8, (2007)
  • [10] Li F.F., Perona P., A Bayesian hierarchical model for learning natural scene categories, Proc. of the 2005 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 524-531, (2005)