Is Multi-model Feature Matching Better for Endoscopic Motion Estimation?

Cited by: 1
Authors:
Xiang, Xiang [1 ]
Mirota, Daniel [1 ]
Reiter, Austin [1 ]
Hager, Gregory D. [1 ]
Affiliations:
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21218 USA
Funding:
US National Institutes of Health
Keywords:
All Open Access; Green;
DOI: 10.1007/978-3-319-13410-9_9
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract:
Camera motion estimation is a standard yet critical step in endoscopic visualization. It is affected by variation in the locations and correspondences of features detected in 2D images. Feature detectors and descriptors vary, though one of the most widely used remains SIFT. Practitioners usually also adopt its feature matching strategy, which defines inliers as feature pairs subject to a single global affine transformation. For endoscopic videos, however, we ask whether it is more suitable to cluster features into multiple groups, while still enforcing the same transformation as in SIFT within each group. Such a multi-model idea has recently been examined in the Multi-Affine work, which outperforms Lowe's SIFT in terms of re-projection error on minimally invasive endoscopic images with manually labelled ground-truth matches of SIFT features. Since the difference lies in matching, the accuracy gain of the estimated motion has been attributed to the holistic Multi-Affine feature matching algorithm. More concretely, though, the matching criterion and point searching can remain the same as those built into SIFT; we argue that the real variation lies only in the motion model verification: we either enforce a single global motion model or employ a group of multiple local ones. In this paper, we investigate how sensitive the estimated motion is to the number of motion models assumed in feature matching. While this sensitivity could be evaluated analytically, we present an empirical analysis in a leave-one-out cross-validation setting that does not require ground-truth match labels. The sensitivity is then characterized by the variance of a sequence of motion estimates. We present a series of quantitative comparisons, such as accuracy and variance, between Multi-Affine motion models and the global affine model.
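The contrast the abstract draws — verifying matches against one global affine model versus one local affine model per feature group — can be illustrated with a minimal numpy sketch. The synthetic correspondences, cluster assignments, and helper names below are hypothetical illustrations, not the paper's implementation: when two image regions undergo different local motions, per-group affine fits achieve a much lower re-projection error than a single global fit.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine fit: dst ~= [src | 1] @ M, with M of shape (3, 2)."""
    X = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M

def reproj_error(src, dst, M):
    """Mean Euclidean re-projection error of matches under affine model M."""
    X = np.hstack([src, np.ones((len(src), 1))])
    return np.linalg.norm(X @ M - dst, axis=1).mean()

rng = np.random.default_rng(0)

# Two feature clusters whose matches follow different local affine motions,
# mimicking tissue regions that move differently in an endoscopic view.
src1 = rng.uniform(0, 100, (30, 2))
src2 = rng.uniform(100, 200, (30, 2))
A1, t1 = np.array([[1.0, 0.1], [-0.1, 1.0]]), np.array([5.0, 2.0])
A2, t2 = np.array([[0.9, -0.2], [0.2, 0.9]]), np.array([-3.0, 8.0])
dst1 = src1 @ A1.T + t1
dst2 = src2 @ A2.T + t2

# Single global model: one affine forced onto all matches (SIFT-style verification).
src, dst = np.vstack([src1, src2]), np.vstack([dst1, dst2])
err_global = reproj_error(src, dst, fit_affine(src, dst))

# Multi-model verification: one affine per group, errors averaged over groups.
err_multi = np.mean([reproj_error(s, d, fit_affine(s, d))
                     for s, d in [(src1, dst1), (src2, dst2)]])

print(f"global affine error: {err_global:.3f}, multi-affine error: {err_multi:.3f}")
```

With noiseless per-group motions the multi-affine error is essentially zero, while the global model, unable to explain both motions at once, leaves a large residual — the sensitivity to the number of assumed motion models that the paper quantifies empirically.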
Pages: 88-98 (11 pages)