VTG: A Visual-Tactile Dataset for Three-Finger Grasp

Cited by: 0
Authors
Li, Tong [1]
Yan, Yuhang [1]
Yu, Chengshun [1]
An, Jing [1]
Wang, Yifan [1]
Zhu, Xiaojun [2]
Chen, Gang [1]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Intelligent Engn & Automat, Beijing 100876, Peoples R China
[2] Jianghuai Adv Technol Ctr, Hefei, Peoples R China
Source
IEEE ROBOTICS AND AUTOMATION LETTERS
Funding
National Natural Science Foundation of China;
Keywords
Grasping; Robots; Visualization; Robot kinematics; Tactile sensors; Point cloud compression; Force; Sensors; Stability criteria; Shape; Visual-tactile dataset; three-fingered robotic grasping; grasping stability prediction; grasping control;
DOI
10.1109/LRA.2024.3477168
CLC classification
TP24 [Robotics];
Discipline classification code
080202; 1405
Abstract
Three-fingered hands offer more contact points and flexible fingertip configurations, enabling complex grasping modes and finer manipulation of objects of various shapes and sizes. However, existing research on visual-tactile integrated robotic grasping focuses primarily on grippers and lacks a general dataset that merges visual and tactile information across the entire grasping process. In this letter, we introduce the VTG dataset, which supports various aspects of three-fingered robotic grasping control. The VTG dataset includes three-view point clouds of objects, grasping modes and finger angles of three-fingered hands, and tactile data at multiple spatial contact locations during the grasping process. By integrating visual and tactile information, we develop a robotic grasping controller that combines a grasping stability prediction module and a grasping adjustment module. Representing tactile data as a static graph structure based on the spatial distribution of the tactile sensors, the grasping stability prediction module is built on a multi-scale graph neural network, MS-GCN. It combines multi-scale graph topological features with various grasping modes and achieves 98.4% accuracy in robotic grasping stability prediction. Additionally, the controller adapts to unknown objects of varying hardness and shape, achieving a stable grasp within approximately 0.4 s after contact.
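
The abstract only outlines the approach, so the following is a minimal, illustrative Python sketch (not the authors' implementation) of the idea it describes: per-pad tactile readings arranged as a graph whose nodes follow the sensors' spatial layout, aggregated at two graph scales and combined with a grasping-mode encoding to classify grasp stability. The sensor layout, feature dimensions, mode encoding, and network sizes below are assumptions; the paper's actual MS-GCN architecture is not specified in this record.

import torch
import torch.nn as nn

# Hypothetical tactile layout: a few sensor pads per finger on a three-fingered
# hand. Edges link spatially adjacent pads; the real VTG sensor layout is not
# given in this record, so this adjacency is purely illustrative.
NUM_FINGERS, PADS_PER_FINGER = 3, 4
N = NUM_FINGERS * PADS_PER_FINGER

def build_adjacency(n_fingers: int, pads: int) -> torch.Tensor:
    # Chain adjacency within each finger plus self-loops, then apply
    # symmetric normalization D^{-1/2} A D^{-1/2}.
    n = n_fingers * pads
    A = torch.eye(n)
    for f in range(n_fingers):
        for p in range(pads - 1):
            i, j = f * pads + p, f * pads + p + 1
            A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = A.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

class MultiScaleGCN(nn.Module):
    # Two-scale graph convolution (one-hop and two-hop aggregation) followed by
    # a stability classifier head; a stand-in for the paper's MS-GCN.
    def __init__(self, in_dim: int, hidden: int = 32, n_modes: int = 3):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)   # scale 1: direct neighbours
        self.w2 = nn.Linear(in_dim, hidden)   # scale 2: two-hop neighbours
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden + n_modes, 2),  # stable vs. unstable logits
        )

    def forward(self, x, A_norm, mode_onehot):
        h1 = self.w1(A_norm @ x)                      # one-hop features
        h2 = self.w2(A_norm @ (A_norm @ x))           # two-hop features
        h = torch.cat([h1, h2], dim=-1).mean(dim=0)   # graph-level pooling
        return self.head(torch.cat([h, mode_onehot]))

A_norm = build_adjacency(NUM_FINGERS, PADS_PER_FINGER)
tactile = torch.rand(N, 3)              # placeholder per-pad readings (e.g. 3-axis force)
mode = torch.tensor([1.0, 0.0, 0.0])    # assumed one-hot encoding of the grasping mode
print(MultiScaleGCN(in_dim=3)(tactile, A_norm, mode))  # unnormalized stability scores

In practice such a classifier would be trained on labeled stable/unstable grasps from the dataset; the graph structure stays fixed because it mirrors the fixed spatial arrangement of the tactile sensors, which is the static-graph idea mentioned in the abstract.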
Pages: 10684-10691
Page count: 8
Related papers
50 items in total
  • [31] A Reconfigurable Three-Finger Robotic Gripper
    Li, Guozhi
    Fu, Cong
    Zhang, Fuhai
    Wang, Shuguo
2015 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, 2015: 1556 - 1561
  • [32] Research on Generation Method of Grasp Strategy Based on DeepLab V3+ for Three-Finger Gripper
    Jiang, Sanlong
    Li, Shaobo
    Bai, Qiang
    Yang, Jing
    Miao, Yanming
    Chen, Leiyu
    INFORMATION, 2021, 12 (07)
  • [33] Tactile, Audio, and Visual Dataset During Bare Finger Interaction with Textured Surfaces
    Devillard, Alexis W. M.
    Ramasamy, Aruna
    Cheng, Xiaoxiao
    Faux, Damien
    Burdet, Etienne
    SCIENTIFIC DATA, 2025, 12 (01)
  • [34] Comparison of Visual and Visual-Tactile Inspection of Aircraft Engine Blades
    Aust, Jonas
    Mitrovic, Antonija
    Pons, Dirk
    AEROSPACE, 2021, 8 (11)
  • [35] Cross-modal interference in rapid serial visual-tactile, tactile-visual presentations
    Hashimoto, F
    Nakayama, M
    Hayashi, M
    Endo, Y
    PERCEPTION, 2005, 34 : 81 - 81
  • [36] Tactile, Visual, and Crossmodal Visual-Tactile Change Blindness: The Effect of Transient Type and Task Demands
    Riggs, Sara Lu
    Sarter, Nadine
    HUMAN FACTORS, 2019, 61 (01) : 5 - 24
  • [37] INDSCAL Analysis of Perceptual Judgments for 24 Consonants via Visual, Tactile, and Visual-Tactile Inputs
    Danhauer, JL
    Appel, MA
    JOURNAL OF SPEECH AND HEARING RESEARCH, 1976, 19 (01): 68 - 77
  • [38] Vision of a pictorial hand modulates visual-tactile interactions
    Igarashi, Yuka
    Kitagawa, Norimichi
    Ichihara, Shigeru
    COGNITIVE, AFFECTIVE, & BEHAVIORAL NEUROSCIENCE, 2004, 4 : 182 - 192
  • [39] Visual-tactile manipulation to collect household waste in outdoor
    Castano-Amoros, Julio
    Paez-Ubieta, Ignacio de Loyola
    Gil, Pablo
    Puente, Santiago Timoteo
REVISTA IBEROAMERICANA DE AUTOMATICA E INFORMATICA INDUSTRIAL, 2023, 20 (02): 163 - 174
  • [40] Visual-tactile multisensory integration in primate parietal operculum
    Hihara, Sayaka
    Iriki, Atsushi
    Taoka, Miki
    Tanaka, Michio
    JOURNAL OF PHYSIOLOGICAL SCIENCES, 2013, 63 : S253 - S253