VTG: A Visual-Tactile Dataset for Three-Finger Grasp

Cited by: 0
Authors
Li, Tong [1 ]
Yan, Yuhang [1 ]
Yu, Chengshun [1 ]
An, Jing [1 ]
Wang, Yifan [1 ]
Zhu, Xiaojun [2 ]
Chen, Gang [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Intelligent Engn & Automat, Beijing 100876, Peoples R China
[2] Jianghuai Adv Technol Ctr, Hefei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Grasping; Robots; Visualization; Robot kinematics; Tactile sensors; Point cloud compression; Force; Sensors; Stability criteria; Shape; Visual-tactile dataset; three-fingered robotic grasping; grasping stability prediction; grasping control;
DOI
10.1109/LRA.2024.3477168
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject Classification Codes
080202 ; 1405 ;
Abstract
Three-fingered hands offer more contact points and flexible fingertip configurations, enabling complex grasping modes and finer manipulation of objects of various shapes and sizes. However, existing research on visual-tactile integrated robotic grasping focuses primarily on grippers and lacks a general dataset merging visual and tactile information across the entire grasping process. In this letter, we introduce the VTG dataset, which can support various aspects of three-fingered robotic grasping control. The VTG dataset includes three-view point clouds of objects, grasping modes, and finger angles of three-fingered hands, as well as tactile data at multiple spatial contact locations during the grasping process. By integrating visual and tactile information, we develop a robotic grasping controller that comprises a grasping stability prediction module and a grasping adjustment module. Representing tactile data as a static graph structure based on the spatial distribution of the tactile sensors, the grasping stability prediction module is built on a multi-scale graph neural network, MS-GCN. It combines multi-scale graph topological features with the various grasping modes, achieving an accuracy of 98.4% in robotic grasping stability prediction. Additionally, the controller successfully adapts to unknown objects of varying hardness and shape, achieving a stable grasp within approximately 0.4 s after contact.
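The abstract's core representational idea, tactile sensors as nodes of a static graph whose edges follow the sensors' spatial layout, with features aggregated over multiple neighborhood scales, can be sketched as follows. This is a minimal illustrative sketch, not the paper's MS-GCN: the sensor count (12), the 3-nearest-neighbor connectivity, the two-hop "multi-scale" aggregation, and the random weights are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layout: 12 tactile sensors with 3-D positions on the fingertips.
positions = rng.random((12, 3))

def knn_adjacency(pos, k=3):
    """Static adjacency: connect each sensor to its k spatially nearest neighbours."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # no self-edges here
    idx = np.argsort(d, axis=1)[:, :k]
    A = np.zeros((len(pos), len(pos)))
    for i, nbrs in enumerate(idx):
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)                   # symmetrise

def normalize(A):
    """Symmetric normalisation with self-loops, as in standard GCN layers."""
    A_hat = A + np.eye(len(A))
    D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

A = knn_adjacency(positions)
A_norm = normalize(A)

# One pressure reading per sensor; random projection weights (illustrative only).
X = rng.random((12, 1))
W1 = rng.standard_normal((1, 8))
W2 = rng.standard_normal((1, 8))

# "Multi-scale": aggregate over 1-hop and 2-hop neighbourhoods, then mean-pool
# each scale into a graph-level embedding a stability classifier could consume.
h1 = np.maximum(A_norm @ X @ W1, 0)             # 1-hop features (ReLU)
h2 = np.maximum(A_norm @ A_norm @ X @ W2, 0)    # 2-hop features (ReLU)
graph_embedding = np.concatenate([h1.mean(0), h2.mean(0)])
print(graph_embedding.shape)  # (16,)
```

Because the adjacency is fixed by the sensor geometry rather than the object being grasped, it can be precomputed once, which is what makes the "static graph" formulation attractive for per-timestep stability prediction during a grasp.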
Pages: 10684 - 10691
Page count: 8