Development and Evaluation of a Learning-based Model for Real-time Haptic Texture Rendering

Cited by: 0
Authors
Heravi N. [1]
Culbertson H. [2]
Okamura A.M. [1]
Bohg J. [3]
Affiliations
[1] Department of Mechanical Engineering, Stanford University
[2] Department of Computer Science, University of Southern California
[3] Department of Computer Science, Stanford University
Keywords
Artificial Intelligence; Data models; Haptic interfaces; Haptics; Machine Learning; Predictive models; Real-time systems; Rendering (computer graphics); Solid modeling; Surface texture; Texture
DOI
10.1109/TOH.2024.3382258
Abstract
Current Virtual Reality (VR) environments lack the haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface. Adding realistic haptic textures to VR environments requires a model that generalizes to variations in a user's interaction and to the wide variety of existing textures in the world. Current methods for haptic texture rendering exist, but they usually develop one model per texture, resulting in low scalability. We present a deep learning-based, action-conditional model for haptic texture rendering and evaluate its perceptual performance in rendering realistic texture vibrations through a multi-part human user study. The model is unified over all materials and uses data from a vision-based tactile sensor (GelSight) to render the appropriate surface, conditioned on the user's action, in real time. For texture rendering, we use a high-bandwidth vibrotactile transducer attached to a 3D Systems Touch device. The results of our user study show that our learning-based method creates high-frequency texture renderings of comparable or better quality than state-of-the-art methods, without the need to learn a separate model per texture. Furthermore, we show that the method can render previously unseen textures using a single GelSight image of their surface.
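The abstract describes a single action-conditional model that maps a GelSight image of a surface and the user's current action to a texture vibration signal. The following is a minimal, hypothetical PyTorch sketch of that idea; the layer sizes, the choice of action features (scan speed and normal force), the output window length, and all names are illustrative assumptions, not the authors' architecture.

# Hypothetical sketch of an action-conditional texture-vibration model
# (illustrative only; not the architecture from the paper).
import torch
import torch.nn as nn

class TextureVibrationModel(nn.Module):
    """Predicts a short window of vibrotactile acceleration samples from a
    GelSight image of the surface and the user's current action
    (here assumed to be scan speed and normal force)."""

    def __init__(self, action_dim: int = 2, out_samples: int = 100):
        super().__init__()
        # Small CNN encoder producing a texture embedding from the GelSight image.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU(),
        )
        # MLP head conditioned on the texture embedding and the user's action.
        self.head = nn.Sequential(
            nn.Linear(64 + action_dim, 128), nn.ReLU(),
            nn.Linear(128, out_samples),
        )

    def forward(self, gelsight_image: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        z = self.image_encoder(gelsight_image)   # (B, 64) texture embedding
        x = torch.cat([z, action], dim=-1)       # condition on speed/force
        return self.head(x)                      # (B, out_samples) vibration window

# Example usage: one texture image and one action (speed in m/s, force in N).
model = TextureVibrationModel()
image = torch.rand(1, 3, 128, 128)
action = torch.tensor([[0.05, 1.0]])
vibration = model(image, action)   # window of acceleration samples to drive the transducer
print(vibration.shape)             # torch.Size([1, 100])

Because the texture identity enters only through the image embedding, a single such network can, in principle, serve all materials and be queried with a new GelSight image at render time, which is the scalability argument the abstract makes.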
Pages: 1-12
Page count: 11