Object Grasp Control of a 3D Robot Arm by Combining EOG Gaze Estimation and Camera-Based Object Recognition

Cited by: 1
Authors
bin Suhaimi, Muhammad Syaiful Amri [1 ,2 ]
Matsushita, Kojiro [2 ,3 ]
Kitamura, Takahide [2 ,3 ]
Laksono, Pringgo Widyo [2 ,4 ]
Sasaki, Minoru [2 ,3 ]
Affiliations
[1] Univ Tunku Abdul Rahman, Fac Informat & Commun Technol, Jalan Univ, Bandar Barat 31900, Kampar, Malaysia
[2] Gifu Univ, Grad Sch Engn, 1-1 Yanagido, Gifu 5011193, Japan
[3] Tokai Natl Higher Educ & Res Syst, Intelligent Prod Technol Res & Dev Ctr Aerosp IPTe, Gifu 5011193, Japan
[4] Univ Sebelas Maret, Fac Engn, Ind Engn, Surakarta 57126, Indonesia
Keywords
EOG; gaze estimation; robot arm; object grasp; welfare robot;
DOI
10.3390/biomimetics8020208
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
The purpose of this paper is to quickly and stably achieve object grasping with a 3D robot arm controlled by electrooculography (EOG) signals. An EOG signal is a biological signal generated when the eyeballs move, which enables gaze estimation. In conventional research, gaze estimation has been used to control a 3D robot arm for welfare purposes. However, the EOG signal is known to lose some of the eye movement information as it travels through the skin, resulting in errors in EOG gaze estimation. Consequently, EOG gaze estimation alone has difficulty pointing to the object accurately, and the object may not be grasped appropriately. Therefore, it is important to develop a methodology that compensates for the lost information and increases spatial accuracy. This paper aims to realize highly accurate object grasping with a robot arm by combining EOG gaze estimation with the object recognition of camera image processing. The system consists of a robot arm, top and side cameras, a display showing the camera images, and an EOG measurement analyzer. The user manipulates the robot arm through the camera images, which can be switched, and the EOG gaze estimation specifies the object. At the beginning, the user gazes at the center of the screen and then moves their eyes to gaze at the object to be grasped. After that, the proposed system recognizes the object in the camera image via image processing and grasps it using the object centroid. The object is selected as the one whose centroid is closest to the estimated gaze position within a certain distance (threshold), thus enabling highly accurate object grasping. The observed size of the object on the screen can differ depending on the camera installation and the screen display state. Therefore, it is crucial to set the distance threshold from the object centroid for object selection. The first experiment is conducted to clarify the distance error of the EOG gaze estimation in the proposed system configuration.
As a result, it is confirmed that the distance error ranges from 1.8 to 3.0 cm. The second experiment evaluates the object grasping performance with two thresholds taken from the first experimental results: the mid-range distance error value of 2 cm and the maximum distance error value of 3 cm. As a result, the grasping speed with the 3 cm threshold is found to be 27% faster than with the 2 cm threshold due to more stable object selection.
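The centroid-threshold selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, argument names, and the assumption that gaze and centroid coordinates share a common on-screen unit (centimeters, as in the experiments) are all hypothetical.

```python
import math

def select_object(gaze_xy, centroids, threshold_cm=3.0):
    """Pick the object whose centroid lies nearest the estimated gaze
    point, accepting it only if it falls within the distance threshold.
    Returns the index of the selected object, or None if no centroid
    is close enough (i.e., the gaze estimate is too far from every object)."""
    best_idx, best_dist = None, float("inf")
    for i, (cx, cy) in enumerate(centroids):
        d = math.dist(gaze_xy, (cx, cy))  # Euclidean distance in screen cm
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx if best_dist <= threshold_cm else None

# Two object centroids on screen; the gaze estimate is about 2.1 cm
# from the first one, so it is accepted at a 3 cm threshold but
# rejected at a 2 cm threshold.
objs = [(5.0, 5.0), (12.0, 8.0)]
print(select_object((6.5, 6.5), objs, threshold_cm=3.0))  # → 0
print(select_object((6.5, 6.5), objs, threshold_cm=2.0))  # → None
```

This sketch mirrors the paper's reported trade-off: a larger threshold (3 cm) accepts borderline gaze estimates and so selects objects more consistently, which is consistent with the 27% faster grasping the second experiment observed.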
Pages: 17