Controllable Visual-Tactile Synthesis

Cited by: 2
Authors
Gao, Ruihan [1 ]
Yuan, Wenzhen [1 ]
Zhu, Jun-Yan [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords
GRASP;
DOI
10.1109/ICCV51070.2023.00648
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual try-on. However, current works mainly focus on synthesizing realistic visual outputs, often ignoring other sensory modalities, such as touch, which limits physical interaction with users. In this work, we leverage deep generative models to create a multi-sensory experience where users can touch and see the synthesized object when sliding their fingers on a haptic surface. The main challenges lie in the significant scale discrepancy between vision and touch sensing and the lack of explicit mapping from touch sensing data to a haptic rendering device. To bridge this gap, we collect high-resolution tactile data with a GelSight sensor and create a new visuotactile clothing dataset. We then develop a conditional generative model that synthesizes both visual and tactile outputs from a single sketch. We evaluate our method regarding image quality and tactile rendering accuracy. Finally, we introduce a pipeline to render high-quality visual and tactile outputs on an electroadhesion-based haptic device for an immersive experience, allowing for challenging materials and editable sketch inputs.
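The abstract describes a conditional generator that maps a single sketch to two outputs at different scales: a visual image and a finer-resolution tactile map. A minimal sketch of that idea, assuming a shared encoder with two decoder heads (all names, shapes, and weights below are illustrative placeholders, not the paper's actual architecture):

```python
import numpy as np

def encode(sketch, w_enc):
    # Flatten the sketch and project it to a shared latent code.
    z = sketch.reshape(-1) @ w_enc
    return np.tanh(z)

def decode(z, w_dec, out_shape):
    # Project the latent code back to a spatial output.
    out = np.tanh(z @ w_dec)
    return out.reshape(out_shape)

rng = np.random.default_rng(0)
sketch = rng.random((32, 32))                           # grayscale sketch input
w_enc = rng.standard_normal((32 * 32, 64)) * 0.01       # shared encoder weights
w_img = rng.standard_normal((64, 32 * 32 * 3)) * 0.01   # visual (RGB) head
w_tac = rng.standard_normal((64, 64 * 64)) * 0.01       # tactile head, finer grid

z = encode(sketch, w_enc)
image = decode(z, w_img, (32, 32, 3))   # synthesized visual output
tactile = decode(z, w_tac, (64, 64))    # synthesized tactile map
print(image.shape, tactile.shape)       # → (32, 32, 3) (64, 64)
```

The tactile head deliberately emits a denser grid than the visual head, mirroring the scale discrepancy between vision and touch sensing that the abstract identifies as a main challenge.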
Pages: 7017-7029
Page count: 13
Related Papers
50 records in total
  • [1] Visual-tactile saccadic inhibition
    Åkerfelt, A
    Colonius, H
    Diederich, A
    EXPERIMENTAL BRAIN RESEARCH, 2006, 169 (04) : 554 - 563
  • [3] Persistence of visual-tactile enhancement in humans
    Taylor-Clarke, M
    Kennett, S
    Haggard, P
    NEUROSCIENCE LETTERS, 2004, 354 (01) : 22 - 25
  • [4] A Visual-Tactile System Of Phonetical Symbolization
    Zaliouk, A.
    JOURNAL OF SPEECH AND HEARING DISORDERS, 1954, 19 (02): : 190 - 207
  • [5] Visual-Tactile Perception of Biobased Composites
    Thundathil, Manu
    Nazmi, Ali Reza
    Shahri, Bahareh
    Emerson, Nick
    Muessig, Joerg
    Huber, Tim
    MATERIALS, 2023, 16 (05)
  • [6] Visual-Tactile Fusion for Object Recognition
    Liu, Huaping
    Yu, Yuanlong
    Sun, Fuchun
    Gu, Jason
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2017, 14 (02) : 996 - 1008
  • [7] Deforming Skin Illusion by Visual-tactile Stimulation
    Haraguchi, Gakumaru
    Kitazaki, Michiteru
    29TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY, VRST 2023, 2023,
  • [8] Lifelong robotic visual-tactile perception learning
    Dong, Jiahua
    Cong, Yang
    Sun, Gan
    Zhang, Tao
    PATTERN RECOGNITION, 2022, 121
  • [9] AViTa: Adaptive Visual-Tactile Dexterous Grasping
    Yan, Hengxu
    Fang, Hao-Shu
    Lu, Cewu
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (11): : 9462 - 9469
  • [10] Visual-tactile spatial interaction in saccade generation
    Diederich, Adele
    Colonius, Hans
    Bockhorst, Daniela
    Tabeling, Sandra
    EXPERIMENTAL BRAIN RESEARCH, 2003, 148 : 328 - 337