Study of 3D vision guidance strategy for robots with sprayed parts hanging down

Cited by: 0
Authors
Zhang L. [1 ]
Lai H. [1 ]
Yang Y. [1 ]
Li S. [1 ]
Zhu X. [1 ]
Affiliations
[1] College of Mechanical Engineering, Ningxia University, Yinchuan
Keywords
3D vision guidance; deep learning; pose estimation; robot control; spraying lines;
DOI
10.19650/j.cnki.cjsi.2311149
Abstract
Because the overhead chain conveyor that transports sprayed parts is difficult to position precisely, which lowers hanging efficiency, a 3D-vision-based control strategy is proposed to guide a robot in hanging sprayed parts. The mask of each sprayed part is obtained through training and inference with the Mask R-CNN instance segmentation network, and the color and depth maps are obtained after pixel alignment and instance segmentation. After hand-eye calibration of the robot vision system, quintic polynomial interpolation is designed so that the robot joints move smoothly, and the robot is controlled and guided to hang the sprayed parts. Experimental results show that the average error of the sprayed parts' orientation angle does not exceed 10°, and the average error in the Z-motion direction is 6.83 mm, with a minimum of 0.02 mm; the robot can be guided to autonomously hang sprayed parts in both the simulation environment and the on-site environment. © 2023 Science Press. All rights reserved.
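The smooth joint motion mentioned in the abstract relies on quintic (fifth-order) polynomial interpolation, whose standard boundary conditions impose zero velocity and acceleration at both ends of a point-to-point move. A minimal sketch of that idea (the function names and the zero boundary conditions are assumptions for illustration, not the paper's exact formulation):

```python
def quintic_coeffs(q0, qf, T):
    """Coefficients a0..a5 of q(t) = sum(a_i * t**i) for a joint moving
    from q0 to qf in T seconds, with zero velocity and acceleration at
    both endpoints (the standard quintic boundary conditions)."""
    d = qf - q0
    return [q0, 0.0, 0.0, 10 * d / T**3, -15 * d / T**4, 6 * d / T**5]

def evaluate(coeffs, t):
    """Evaluate the polynomial at time t."""
    return sum(a * t**i for i, a in enumerate(coeffs))
```

With these coefficients the joint starts and ends at rest; by symmetry, a move from 0 to 1 rad over 2 s passes through 0.5 rad at t = 1 s.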
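The pixel alignment between depth and color maps is typically the back-project / transform / re-project step of a pinhole camera model. A minimal NumPy sketch, assuming depth and color intrinsic matrices `K_d` and `K_c` and a depth-to-color extrinsic `(R, t)` (all symbol names here are illustrative, not taken from the paper):

```python
import numpy as np

def align_depth_pixel(u, v, z, K_d, K_c, R, t):
    """Map one depth pixel (u, v) with depth z into color-image
    coordinates: back-project with the depth intrinsics, move the 3D
    point into the color camera frame, then project with K_c."""
    p_depth = z * np.linalg.inv(K_d) @ np.array([u, v, 1.0])  # 3D point, depth frame
    p_color = R @ p_depth + t                                  # 3D point, color frame
    uvw = K_c @ p_color                                        # homogeneous projection
    return uvw[:2] / uvw[2]
```

When the two cameras share intrinsics and the extrinsic is the identity, a pixel maps onto itself, which is a quick sanity check for the geometry.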
Pages: 35-42
Number of pages: 7