Matching and Pose Estimation of Noisy, Partial and Planar B-Rep Models

Cited by: 0
Authors
Sand, Maximilian [1 ]
Henrich, Dominik [1 ]
Institution
[1] Univ Bayreuth, Univ Str 30, D-95440 Bayreuth, Germany
Keywords
Registration; Pose Estimation; Boundary Representation Model; Shape Matching
DOI
10.1145/3095140.3095170
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Three-dimensional models can be represented in various ways. One possibility is the boundary representation (B-Rep) model, which contains both geometric and topological information. This makes B-Reps suitable for tasks that need an explicit algebraic representation of the surface, e.g. numerical optimization or simulation. Reconstructing a B-Rep model of an object or of the environment is often a tedious task that requires considerable manual intervention. An intuitive alternative is to use a hand-held depth camera and perform the reconstruction in real time. In the domain of robotics, we previously presented a system [18] that incrementally reconstructs a planar B-Rep model from a stream of organized point clouds acquired by a robot-mounted camera. Since that approach relies on known acquisition poses, it cannot be used directly in a setup with a hand-held camera. The contribution of this work is a new method for matching planar B-Rep models and estimating their relative pose. In particular, the input models may be noisy and incomplete, containing very few geometric features such as corners, which is likely to occur when a model obtained from only a single viewpoint is processed. In combination with our previous work, we show that our approach can be used to build a simultaneous localization and mapping (SLAM) system for the easy reconstruction of planar B-Rep models.
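The abstract does not spell out the matching or pose-estimation procedure. As a rough illustration of the pose-estimation step on planar models only, the sketch below recovers a rigid transform from already-matched plane correspondences using a standard SVD plus least-squares approach; it is not the authors' algorithm, and the function name, the Hessian-normal-form inputs, and the assumption that correspondences are already known are all hypothetical.

# Minimal sketch, NOT the method of the paper: rigid pose from matched plane
# correspondences between two planar models. Each plane is given in Hessian
# normal form (n, d) with n^T x = d; at least three planes with linearly
# independent normals are needed for a unique solution.
import numpy as np

def pose_from_plane_pairs(src_planes, dst_planes):
    """Return (R, t) mapping the source planes onto the destination planes."""
    N_src = np.array([n for n, _ in src_planes])   # k x 3 source normals
    N_dst = np.array([n for n, _ in dst_planes])   # k x 3 destination normals
    d_src = np.array([d for _, d in src_planes])   # k source offsets
    d_dst = np.array([d for _, d in dst_planes])   # k destination offsets

    # Rotation: align source normals with destination normals (Kabsch/SVD).
    H = N_src.T @ N_dst
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T

    # A transformed plane obeys (R n)^T x = d + (R n)^T t, so the translation
    # solves the overdetermined linear system N_dst t = d_dst - d_src.
    t, *_ = np.linalg.lstsq(N_dst, d_dst - d_src, rcond=None)
    return R, t

In a hand-held-camera setting such an estimate would only cover the pose step; establishing reliable plane correspondences on noisy, partial models with few corners is precisely the matching problem the paper addresses.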
Pages: 6
Related Papers (50 in total)
  • [41] B-rep model simplification using selective and iterative volume decomposition to obtain finer multi-resolution models
    Kwon, Soonjo
    Mun, Duhwan
    Kim, Byung Chul
    Han, Soonhung
    Suh, Hyo-Won
    COMPUTER-AIDED DESIGN, 2019, 112 : 23 - 34
  • [42] User-assisted integrated method for controlling level of detail of large-scale B-rep assembly models
    Kwon, Soonjo
    Kim, Byung Chul
    Mun, Duhwan
    Han, Soonhung
    INTERNATIONAL JOURNAL OF COMPUTER INTEGRATED MANUFACTURING, 2018, 31 (09) : 881 - 892
  • [43] Implementing Metric Operators of a Spatial Query Language for 3D Building Models: Octree and B-Rep Approaches
    Borrmann, Andre
    Schraufstetter, Stefanie
    Rank, Ernst
    JOURNAL OF COMPUTING IN CIVIL ENGINEERING, 2009, 23 (01) : 34 - 46
  • [44] Object Pose Estimation via Viewpoint Matching of 3D Models
    Lee, Junha
    Ji, Sanghoon
    You, Sujeong
    2021 21ST INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2021), 2021, : 1546 - 1548
  • [45] eCAD-Net: Editable Parametric CAD Models Reconstruction from Dumb B-Rep Models Using Deep Neural Networks
    Zhang, Chao
    Polette, Arnaud
    Pinquie, Romain
    Carasi, Gregorio
    De Charnace, Henri
    Pernot, Jean-Philippe
    COMPUTER-AIDED DESIGN, 2025, 178
  • [46] Conforming embedded isogeometric analysis for B-Rep CAD models with strong imposition of Dirichlet boundary conditions using trivariate B++ splines
    Zhu, Xuefeng
    Ren, Guangwu
    Zhang, Xiangkui
    Yang, Chunhui
    Xi, An
    Hu, Ping
    Ma, Zheng-Dong
    COMPUTERS & STRUCTURES, 2024, 305
  • [47] An approach to recognize interacting features from B-Rep CAD models of prismatic machined parts using a hybrid (graph and rule based) technique
    Sunil, V. B.
    Agarwal, Rupal
    Pande, S. S.
    COMPUTERS IN INDUSTRY, 2010, 61 (07) : 686 - 701
  • [48] 3D pose estimation by directly matching polyhedral models to gray value gradients
    Kollnig, Henner
    Nagel, Hans-Hellmut
    Universitaet Karlsruhe, Karlsruhe, Germany
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 1997, 23 (03) : 283 - 302
  • [49] 3D Pose Estimation by Directly Matching Polyhedral Models to Gray Value Gradients
    Henner Kollnig
    Hans-Hellmut Nagel
    International Journal of Computer Vision, 1997, 23 : 283 - 302
  • [50] 3D pose estimation by directly matching polyhedral models to gray value gradients
    Kollnig, H
    Nagel, HH
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 1997, 23 (03) : 283 - 302