An Improved Algorithm for Detection and Pose Estimation of Texture-Less Objects

Times Cited: 2
Authors
Peng, Jian [1,2]
Su, Ya [1,2]
Affiliations
[1] China Univ Geosci, Sch Automat, 388 Lumo Rd, Wuhan 430074, Hubei, Peoples R China
[2] Hubei Key Lab Adv Control & Intelligent Automat C, 388 Lumo Rd, Wuhan 430074, Hubei, Peoples R China
Keywords
computer vision; object detection and pose estimation; LineMOD algorithm
DOI
10.20965/jaciii.2021.p0204
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper introduces an improved algorithm for texture-less object detection and pose estimation in industrial scenes. In the template training stage, a multi-scale template training method is proposed to address LineMOD's sensitivity to template depth. During template matching, the test image is first divided into several regions, and training templates with similar depth are then selected according to the depth of each region. In this way, the depth of the templates used for matching stays close to the depth of the target object without traversing the full template set, which speeds up the algorithm while preserving recognition accuracy. In addition, a coarse object-positioning method is proposed that avoids many useless matching operations and further accelerates the algorithm. Experimental results show that the improved LineMOD algorithm effectively solves the template depth sensitivity problem.
Pages: 204-212
Number of pages: 9
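
The following is a minimal, illustrative Python sketch (not the authors' code) of the depth-guided template selection summarized in the abstract: the test depth image is split into regions, a robust depth is estimated for each region, and only templates whose training depth lies near that value are kept for LineMOD-style matching. The data structures, function names, the 3x3 grid, and the 200 mm tolerance are assumptions made for illustration only.

# Sketch of region-wise, depth-guided template selection, assuming a
# multi-scale template set trained at several depths (as described in the
# abstract). All names and thresholds below are illustrative assumptions.

from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Template:
    object_id: str
    train_depth_mm: float   # depth at which this template was trained
    features: np.ndarray    # placeholder for LineMOD gradient/normal features


def split_into_regions(depth_img: np.ndarray, grid: int = 3) -> List[np.ndarray]:
    """Divide the test depth image into a grid x grid set of regions."""
    h, w = depth_img.shape
    regions = []
    for i in range(grid):
        for j in range(grid):
            regions.append(depth_img[i * h // grid:(i + 1) * h // grid,
                                      j * w // grid:(j + 1) * w // grid])
    return regions


def region_depth(region: np.ndarray) -> float:
    """Robust depth estimate for a region (median of valid, non-zero pixels)."""
    valid = region[region > 0]
    return float(np.median(valid)) if valid.size else 0.0


def select_templates_for_region(templates: List[Template],
                                region: np.ndarray,
                                tol_mm: float = 200.0) -> List[Template]:
    """Keep only templates whose training depth is close to the region depth,
    so matching does not have to traverse the full multi-scale template set."""
    d = region_depth(region)
    if d == 0.0:
        return []
    return [t for t in templates if abs(t.train_depth_mm - d) <= tol_mm]


if __name__ == "__main__":
    # Toy example: a synthetic depth image and templates trained at four depths.
    depth_img = np.full((480, 640), 850.0)
    templates = [Template("part_A", d, np.zeros(63)) for d in (600, 800, 1000, 1200)]
    for region in split_into_regions(depth_img):
        kept = select_templates_for_region(templates, region)
        print([t.train_depth_mm for t in kept])   # only depths near 850 mm remain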