Controllable Visual-Tactile Synthesis

Cited by: 2
Authors
Gao, Ruihan [1]
Yuan, Wenzhen [1]
Zhu, Jun-Yan [1]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords
GRASP
DOI
10.1109/ICCV51070.2023.00648
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual try-on. However, current works mainly focus on synthesizing realistic visual outputs, often ignoring other sensory modalities, such as touch, which limits physical interaction with users. In this work, we leverage deep generative models to create a multi-sensory experience where users can touch and see the synthesized object when sliding their fingers on a haptic surface. The main challenges lie in the significant scale discrepancy between vision and touch sensing and the lack of explicit mapping from touch sensing data to a haptic rendering device. To bridge this gap, we collect high-resolution tactile data with a GelSight sensor and create a new visuotactile clothing dataset. We then develop a conditional generative model that synthesizes both visual and tactile outputs from a single sketch. We evaluate our method regarding image quality and tactile rendering accuracy. Finally, we introduce a pipeline to render high-quality visual and tactile outputs on an electroadhesion-based haptic device for an immersive experience, allowing for challenging materials and editable sketch inputs.
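
For intuition, the following is a minimal, hypothetical Python/PyTorch sketch of the kind of model the abstract describes: a single input sketch is encoded once and decoded by two heads, one producing the RGB image and one producing a tactile map that a haptic renderer could consume. The class name, layer sizes, and two-head layout are illustrative assumptions, not the authors' actual architecture.

# Hypothetical sketch-to-visual-and-tactile generator (illustrative only).
import torch
import torch.nn as nn

class SketchToVisuoTactile(nn.Module):
    """One shared encoder over the input sketch, two decoder heads:
    an RGB image head and a single-channel tactile-map head."""
    def __init__(self, ch: int = 64):
        super().__init__()
        # Shared encoder: downsample the 1-channel sketch twice.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Visual head: upsample back to a 3-channel RGB image in [-1, 1].
        self.visual_head = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
        # Tactile head: same resolution, 1 channel standing in for a
        # height/texture map that a haptic device could render.
        self.tactile_head = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, sketch: torch.Tensor):
        feats = self.encoder(sketch)          # (B, 2*ch, H/4, W/4)
        return self.visual_head(feats), self.tactile_head(feats)

if __name__ == "__main__":
    model = SketchToVisuoTactile()
    sketch = torch.randn(1, 1, 256, 256)      # one 256x256 line sketch
    rgb, tactile = model(sketch)
    print(rgb.shape, tactile.shape)           # (1, 3, 256, 256) (1, 1, 256, 256)

The shared encoder reflects the paper's premise that one sketch conditions both modalities; the tactile head's single channel is a placeholder for whatever texture encoding the electroadhesion device expects.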
Pages: 7017-7029 (13 pages)
Related papers (50 in total)
  • [31] Visual-Tactile Sensing for In-Hand Object Reconstruction
    Xu, Wenqiang
    Yu, Zhenjun
    Xue, Han
    Ye, Ruolin
    Yao, Siqiong
    Lu, Cewu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023 : 8803 - 8812
  • [32] Visual-Tactile Fused Graph Learning for Object Clustering
    Zhang, Tao
    Cong, Yang
    Sun, Gan
    Dong, Jiahua
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (11) : 12275 - 12289
  • [33] Perception of Visual-Tactile Colocation in the First Year of Life
    Freier, Livia
    Mason, Luke
    Bremner, Andrew J.
    DEVELOPMENTAL PSYCHOLOGY, 2016, 52 (12) : 2184 - 2190
  • [34] Effects of viewing distance on visual and visual-tactile evaluation of black fabric
    Isami, Chiari
    Kondo, Aki
    Goto, Aya
    Sukigara, Sachiko
    Journal of Fiber Science and Technology, 2021, 77 (02) : 56 - 65
  • [35] Visual-Tactile and Tactile-Tactile Paired-Associate Learning by Normal and Poor Readers
    Steger, J. A.
    Vellutino, F. R.
    Meshoulam, U.
    PERCEPTUAL AND MOTOR SKILLS, 1972, 35 (01) : 263 - &
  • [36] Critique of "Tactile, Visual, and Crossmodal Visual-Tactile Change Blindness: The Effect of Transient Type and Task Demands"
    Greenlee, Eric T.
    HUMAN FACTORS, 2019, 61 (01) : 25 - 28
  • [37] GelFinger: A Novel Visual-Tactile Sensor With Multi-Angle Tactile Image Stitching
    Lin, Zhonglin
    Zhuang, Jiaquan
    Li, Yufeng
    Wu, Xianyu
    Luo, Shan
    Gomes, Daniel Fernandes
    Huang, Feng
    Yang, Zheng
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (09) : 5982 - 5989
  • [38] Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching
    Goeschl, Florian
    Engel, Andreas K.
    Friese, Uwe
    PLOS ONE, 2014, 9 (09)
  • [39] Robotic grasp slip detection based on visual-tactile fusion
    Cui, S.
    Wei, J.
    Wang, R.
    Wang, S.
    Journal of Huazhong University of Science and Technology (Natural Science Edition), 2020, 48 (01) : 98 - 102
  • [40] VTG: A Visual-Tactile Dataset for Three-Finger Grasp
    Li, Tong
    Yan, Yuhang
    Yu, Chengshun
    An, Jing
    Wang, Yifan
    Zhu, Xiaojun
    Chen, Gang
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (11) : 10684 - 10691