Visual Positioning Technology of Assembly Robot Workpiece Based on Prediction of Key Points

Cited by: 0
Authors
Ni T. [1]
Zhang P. [1]
Li W. [2]
Zhao Y. [1]
Zhang H. [2]
Zhai H. [1]
Affiliations
[1] College of Vehicle and Energy, Yanshan University, Qinhuangdao
[2] College of Mechanical and Aerospace Engineering, Jilin University, Changchun
Keywords
Assembly robot; Deep learning; Key points; Pose of workpiece; Prediction
DOI
10.6041/j.issn.1000-1298.2022.06.047
Abstract
To address the problems that manual feature detection for assembly robots is susceptible to interference factors such as illumination, background, and occlusion, and that point-cloud-based feature detection depends on the accuracy of model construction, a deep learning method for visual positioning of workpieces based on key point prediction was proposed. First, ArUco pose-detection markers and ICP point cloud registration were used to construct a data set for training the pose estimation network: depth images of the workpiece were collected from multiple viewing angles, the corresponding pose of the workpiece was computed, and key points on the workpiece surface were selected to form the data set. A vector field of directions from foreground pixels to the surface key points was then constructed, and the network was trained on the data set to predict this vector field. The direction vectors of the pixels pointing to the same key point were divided into two groups, the intersections of vectors from the two groups were taken to generate key point hypotheses, and all hypotheses were evaluated by RANSAC-based voting. The EPnP solver was then used to compute the pose of the workpiece, and an oriented bounding box of the workpiece was generated to display the pose estimation result. Finally, the accuracy and robustness of the estimation results were verified by experiments. © 2022, Chinese Society of Agricultural Machinery. All rights reserved.
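The hypothesis-generation and voting step described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the helper names (`intersect`, `vote`), the sample pixel coordinates, and the cosine threshold are all assumptions made for the example. Each key point hypothesis is the intersection of two predicted direction rays, and its RANSAC-style score is the number of foreground pixels whose predicted unit direction points at it.

```python
import numpy as np

def intersect(p1, d1, p2, d2):
    """Intersection of two 2D rays p_i + t_i * d_i (hypothetical helper).

    Solves t1 * d1 - t2 * d2 = p2 - p1 as a 2x2 linear system.
    """
    A = np.array([[d1[0], -d2[0]],
                  [d1[1], -d2[1]]])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

def vote(hypothesis, pixels, dirs, cos_thresh=0.99):
    """RANSAC-style score: count pixels whose predicted unit direction
    points (almost) exactly at the hypothesised key point."""
    to_h = hypothesis - pixels                      # vectors pixel -> hypothesis
    to_h /= np.linalg.norm(to_h, axis=1, keepdims=True)
    cos = np.sum(to_h * dirs, axis=1)               # cosine similarity per pixel
    return int(np.sum(cos > cos_thresh))

# Toy example: four foreground pixels, a true key point at (50, 40),
# and noise-free direction predictions pointing at it.
pixels = np.array([[0., 0.], [100., 0.], [0., 80.], [120., 90.]])
keypoint = np.array([50., 40.])
dirs = keypoint - pixels
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

hypothesis = intersect(pixels[0], dirs[0], pixels[1], dirs[1])
# hypothesis recovers the true key point (50, 40); all four pixels vote for it
score = vote(hypothesis, pixels, dirs)
```

In the full pipeline, many such two-ray hypotheses are generated from the predicted vector field, the highest-scoring hypothesis per key point is kept, and the resulting 2D key points are passed with their 3D model coordinates to the EPnP solver (e.g. OpenCV's `cv2.solvePnP` with `cv2.SOLVEPNP_EPNP`) to obtain the workpiece pose.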
Pages: 443-450
Page count: 7