A marker-less assembly stage recognition method based on segmented projection contour

Cited by: 12
|
Authors
Pang, Jiazhen [1 ]
Zhang, Jie [1 ]
Li, Yuan [1 ]
Sun, Wei [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Mech Engn, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Assembly stage recognition; CAD model; Segmented projection; Hausdorff distance; Contour registration; AUGMENTED-REALITY; TRACKING; ROBOT; SYSTEM;
DOI
10.1016/j.aei.2020.101149
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In man-machine cooperative assembly, assembly recognition that determines the current manual working stage provides the key information for driving automatic computer-aided assistance. Focusing on three features of the assembly scene (movable view, process stage, and CAD model template), a view-free and marker-less assembly stage recognition method is proposed in this paper. By constructing a semantic model for the assembly scene and a stage model for CAD parts, a depth image of the assembly and a CAD model can both be extracted as point clouds. We then propose the segmented projection contour descriptor to uniformly express the shape information as a series of contours, so the 3D registration problem is converted into a 2D registration problem. The vertex-to-edge Hausdorff distance is proposed for the partial registration step to determine the transformation matrix for each pair of contours. Finally, an overall matching algorithm based on the overlay ratio is given, and the best-matching stage model indicates the current assembly stage. Recognition and classification experiments are carried out to verify the proposed method. A comparison with the traditional Hausdorff distance shows that the proposed algorithm performs better in stage recognition. Our study reveals that the proposed view-free and marker-less method can solve the stage recognition problem from the assembly's depth image alone, thereby connecting the on-site assembly with its digital information.
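The abstract's core registration step measures how well two 2D contours match via a vertex-to-edge Hausdorff distance. The paper's exact formulation is not given here, so the following is only an illustrative sketch under the common interpretation: for each vertex of one contour, take the distance to the nearest edge (line segment) of the other, closed contour, and report the maximum. Function names are hypothetical.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from 2D point p to the segment from a to b."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:  # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Project p onto the infinite line through a and b,
    # then clamp the projection parameter to the segment [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def vertex_to_edge_hausdorff(contour_a, contour_b):
    """Directed vertex-to-edge Hausdorff distance (illustrative):
    the maximum, over vertices of contour_a, of the distance to the
    nearest edge of the closed polygon contour_b."""
    n = len(contour_b)
    edges = [(contour_b[i], contour_b[(i + 1) % n]) for i in range(n)]
    return max(
        min(point_segment_distance(p, a, b) for a, b in edges)
        for p in contour_a
    )
```

Measuring vertices against edges (rather than against the other contour's vertices, as in the classical point-set Hausdorff distance) makes the measure insensitive to how densely each contour is sampled, which is consistent with the paper's motivation for comparing projected CAD contours against sensed depth-image contours.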
Pages: 13
Related Papers
50 records in total
  • [11] Marker-less registration based on template tracking for augmented reality
    Lin, Liang
    Wang, Yongtian
    Liu, Yue
    Xiong, Caiming
    Zeng, Kun
    MULTIMEDIA TOOLS AND APPLICATIONS, 2009, 41 (02) : 235 - 252
  • [13] Marker-less AR system based on line segment feature
    Nakayama, Yusuke
    Saito, Hideo
    Shimizu, Masayoshi
    Yamaguchi, Nobuyasu
    ENGINEERING REALITY OF VIRTUAL REALITY 2015, 2015, 9392
  • [14] Marker-less vision based tracking for mobile augmented reality
    Beier, D
    Billert, R
    Brüderlin, B
    Stichling, D
    Kleinjohann, B
    SECOND IEEE AND ACM INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY, PROCEEDINGS, 2003, : 258 - 259
  • [15] Marker-less tracking for AR: A learning-based approach
    Genc, Y
    Riedel, S
    Souvannavong, F
    Akinlar, C
    Navab, N
    INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY, PROCEEDINGS, 2002, : 295 - 304
  • [16] Reproduction of a human motion based on marker-less motion capture
    Yonemoto, Satoshi
    Arita, Daisaku
    Taniguchi, Rin-Ichiro
Research Reports on Information Science and Electrical Engineering of Kyushu University, 2000, 5 (01) : 75 - 80
  • [17] Merging artificial effects with marker-less video sequences based on the interacting multiple model method
    Yu, Ying Kin
    Wong, Kin Hong
    Chang, Michael Ming Yuen
    IEEE TRANSACTIONS ON MULTIMEDIA, 2006, 8 (03) : 521 - 528
  • [18] Mixed Marker-Based/Marker-Less Visual Odometry System for Mobile Robots
    Lamberti, Fabrizio
    Sanna, Andrea
    Paravati, Gianluca
    Montuschi, Paolo
    Gatteschi, Valentina
    Demartini, Claudio
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2013, 10
  • [19] Comparison of marker-less and marker-based motion capture for baseball pitching kinematics
    Fleisig, Glenn S.
    Slowik, Jonathan S.
    Wassom, Derek
    Yanagita, Yuki
    Bishop, Jasper
    Diffendaffer, Alek
    SPORTS BIOMECHANICS, 2024, 23 (12) : 2950 - 2959