Real-Time Robot End-Effector Pose Estimation with Deep Network

Cited by: 2
Authors
Cheng, Hu [1]
Wang, Yingying [1]
Meng, Max Q-H [2,3]
Affiliations
[1] Chinese Univ Hong Kong, Dept Elect Engn, Robot Percept & Artificial Intelligence Lab, Hong Kong, Peoples R China
[2] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen, Peoples R China
[3] Chinese Univ Hong Kong, Shenzhen Res Inst, Shenzhen, Peoples R China
Keywords
DOI
10.1109/IROS45743.2020.9341760
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this paper, we propose a novel algorithm that estimates the pose of the robot end-effector using depth vision. The input to our system is the segmented robot hand point cloud from a depth sensor. A neural network then takes the point cloud as input and outputs the position and orientation of the robot end-effector in the camera frame. The estimated pose can serve as the input to the robot controller to reach a specific pose in the camera frame. The neural network is trained on simulated point clouds rendered from the robot hand mesh in different poses. At test time, a single robot hand pose estimate takes 10 ms on a GPU and 14 ms on a CPU, which makes the method suitable for closed-loop robot control systems that require online hand pose estimation. We design a robot hand pose estimation experiment to validate the effectiveness of our algorithm in a real-world setting, on a platform consisting of a Kinova Jaco 2 robot arm and a Kinect v2 depth sensor. We describe the full vision pipeline used to improve the accuracy of end-effector pose estimation, demonstrate that the end-effector pose can be estimated directly from the point cloud, and incorporate the estimated pose into the controller design of the robot arm.
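The abstract describes a network that maps a segmented robot-hand point cloud to the end-effector position and orientation in the camera frame. Below is a minimal sketch of such a point-cloud pose regressor in PyTorch; the PointNet-style architecture, layer sizes, and quaternion output are illustrative assumptions, not the authors' exact model.

# Minimal sketch of a point-cloud -> end-effector pose regressor.
# Assumptions (not from the paper): PointNet-style shared MLP with max pooling,
# pose output as a 3-D translation plus a unit quaternion.
import torch
import torch.nn as nn


class PosePointNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-point feature extractor applied to (B, 3, N) point clouds.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Global feature -> 7-D pose (x, y, z, qw, qx, qy, qz).
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 7),
        )

    def forward(self, points):  # points: (B, N, 3) in the camera frame
        feats = self.point_mlp(points.transpose(1, 2))  # (B, 1024, N)
        global_feat = feats.max(dim=2).values           # symmetric pooling over points
        pose = self.head(global_feat)                   # (B, 7)
        trans, quat = pose[:, :3], pose[:, 3:]
        quat = quat / quat.norm(dim=1, keepdim=True)    # normalize to a unit quaternion
        return trans, quat


if __name__ == "__main__":
    net = PosePointNet().eval()
    cloud = torch.rand(1, 2048, 3)  # one segmented hand point cloud (dummy data)
    with torch.no_grad():
        t, q = net(cloud)
    print(t.shape, q.shape)         # torch.Size([1, 3]) torch.Size([1, 4])

In a closed-loop setup such as the one the abstract describes, a regressor of this kind would be queried once per depth frame and its output fed to the arm controller; the training data would come from rendering the robot hand mesh at sampled joint configurations rather than from real captures.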
Pages: 10921 - 10926
Page count: 6
Related Papers (50 in total)
  • [1] Adaptive real-time estimation of end-effector position and orientation using precise measurements of end-effector position
    Lertpiriyasuwat, Vatchara
    Berg, Martin C.
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2006, 11 (03) : 304 - 319
  • [2] Single Image based Camera Calibration and Pose Estimation of the End-effector of a Robot
    Boby, R. A.
    Saha, S. K.
    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2016, : 2435 - 2440
  • [3] Pose Estimation of Robot End-Effector using a CNN-Based Cascade Estimator
    Ortega, Kevin D.
    Sepulveda, Jorge I.
    Hernandez, Byron
    Holguin, German A.
    Medeiros, Henry
    2023 IEEE 6TH COLOMBIAN CONFERENCE ON AUTOMATIC CONTROL, CCAC, 2023, : 85 - 90
  • [4] Pose Planning for the End-effector of Robot in the Welding of Intersecting Pipes
    Liu Yu
    Zhao Jing
    Lu Zhenyang
    Chen Shujun
    CHINESE JOURNAL OF MECHANICAL ENGINEERING, 2011, 24 (02) : 264 - 270
  • [6] Pose estimation method for a simultaneous three-fingered end-effector
    Fan S.
    Wu J.
    Jin M.
    Fan C.
    Liu H.
Harbin Gongcheng Daxue Xuebao/Journal of Harbin Engineering University, 2019, 40 (02) : 359 - 364
  • [7] Real-time end-effector path following for robot manipulators subject to velocity, acceleration, and jerk joint limits
    Antonelli, G
    Chiaverini, S
    Fusco, G
    2001 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS PROCEEDINGS, VOLS I AND II, 2001, : 452 - 457
  • [8] End-to-End Feature Pyramid Network for Real-Time Multi-Person Pose Estimation
    Luo, Dingli
    Du, Songlin
    Ikenaga, Takeshi
    PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA), 2019,
  • [9] SP-YOLO: an end-to-end lightweight network for real-time human pose estimation
    Zhang, Yuting
    Wang, Zongyan
    Li, Menglong
    Gao, Pei
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (01) : 863 - 876