A Cross-Modal Tactile Reproduction Utilizing Tactile and Visual Information Generated by Conditional Generative Adversarial Networks

Cited by: 0
|
Authors
Hatori, Koki [1 ]
Morikura, Takashi [2 ]
Funahashi, Akira [2 ]
Takemura, Kenjiro [3 ]
Affiliations
[1] Keio Univ, Sch Sci Open & Environm Syst, Yokohama 2238522, Japan
[2] Keio Univ, Dept Biosci & Informat, Yokohama 2238522, Japan
[3] Keio Univ, Dept Mech Engn, Yokohama 2238522, Japan
Source
IEEE ACCESS | 2025 / Vol. 13
Keywords
Generators; Visualization; Vectors; Training; Acoustics; Solid modeling; Generative adversarial networks; Fingers; Ultrasonic transducers; Tactile sensors; Tactile reproduction; cross-modal recognition; conditional generative adversarial networks
DOI
10.1109/ACCESS.2025.3527946
CLC (Chinese Library Classification)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Tactile reproduction technology is a promising advancement within the rapidly expanding field of virtual/augmented reality, and it requires innovative methods specifically tailored to tactile sensory labels. Since human tactile perception is known to be influenced by visual information, this study developed a cross-modal tactile sensory display using Conditional Generative Adversarial Networks (CGANs) to generate both mechanical and visual information. First, sensory evaluation experiments were conducted with 32 participants using twelve metal plate samples to collect tactile information. Next, we prepared 320 images of a variety of materials and conducted sensory evaluation experiments with 30 participants per image to gather the tactile information evoked by viewing the images. Using the collected tactile information as labels and the images as a dataset, we developed four visual information generation models with CGAN, each trained on weighted concatenations of images and labels in which the image elements are amplified by factors of 1, 1,000, 5,000, and 10,000, respectively. Each of these four models was then used to generate twelve images corresponding to the sensory evaluation results for the twelve metal plate samples. We then performed a cross-modal tactile reproduction experiment, using the previously developed tactile information generation model to drive a tactile display alongside the images generated by the visual information generation model. In this experiment, 20 subjects conducted sensory evaluations in which tactile sensations were presented concurrently with the generated images. The results confirmed that the concurrent presentation of mechanical and visual information significantly reduced the mean absolute error between the displayed tactile information and that of the metal plate samples from 2.2 to 1.6 on a 7-point scale.
These findings underscore the effectiveness of the visual information generation model and highlight the potential of integrating tactile and visual information for enhanced tactile reproduction systems.
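The abstract's conditioning scheme, training CGAN variants on weighted concatenations of image and label data with image elements amplified by a chosen factor, can be sketched as follows. This is a minimal illustration under stated assumptions (flattened grayscale images and a fixed-length tactile rating vector); the function name `weighted_concat` and the toy shapes are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def weighted_concat(image, label, image_weight):
    """Build a CGAN-style conditioning vector by concatenating a flattened
    image with a tactile label vector, amplifying the image elements by
    image_weight (1, 1000, 5000, or 10000 in the four model variants)."""
    return np.concatenate([image.ravel() * image_weight, label])

# Toy example: a 4x4 grayscale "image" and a hypothetical tactile label
# vector of three 7-point sensory ratings.
rng = np.random.default_rng(0)
img = rng.random((4, 4))
label = np.array([3.0, 5.0, 2.0])

v = weighted_concat(img, label, 1000)
print(v.shape)  # 16 amplified image elements + 3 label elements
```

Amplifying only the image portion changes the relative scale of the two modalities in the concatenated input, which is presumably how the four variants trade off visual versus tactile influence during training.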
Pages: 9223-9229
Page count: 7
Related Papers
50 records in total
  • [21] Temporal discrimination of unimodal and cross-modal tactile and visual stimuli in primary dystonia
    Tinazzi, M
    Fiorio, M
    Smania, N
    Tamburin, S
    Fiaschi, A
    Aglioti, SM
    MOVEMENT DISORDERS, 2002, 17 : S305 - S305
  • [23] Spatial constraints on visual-tactile cross-modal distractor congruency effects
    Spence, Charles
    Pavani, Francesco
    Driver, Jon
    COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE, 2004, 4 (02) : 148 - 169
  • [24] Unsupervised Generative Adversarial Cross-Modal Hashing
    Zhang, Jian
    Peng, Yuxin
    Yuan, Mingkuan
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 539 - 546
  • [25] Cross-modal attentional interference in rapid serial visual-and-tactile presentations
    Hashimoto, F
    Nakayama, M
    Hayashi, M
    Yamamoto, Y
    PERCEPTION, 2003, 32 : 98 - 98
  • [26] The Affective Experience of Handling Digital Fabrics: Tactile and Visual Cross-Modal Effects
    Wu, Di
    Wu, Ting-I
    Singh, Harsimrat
    Padilla, Stefano
    Atkinson, Douglas
    Bianchi-Berthouze, Nadia
    Chantler, Mike
    Baurley, Sharon
    AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION, PT I, 2011, 6974 : 427 - +
  • [27] Tactile-visual cross-modal shape matching: a functional MRI study
    Saito, DN
    Okada, T
    Morita, Y
    Yonekura, Y
    Sadato, N
    COGNITIVE BRAIN RESEARCH, 2003, 17 (01): : 14 - 25
  • [28] Temporal Electroencephalography Traits Dissociating Tactile Information and Cross-Modal Congruence Effects
    Ozawa, Yusuke
    Yoshimura, Natsue
    SENSORS, 2024, 24 (01)
  • [29] Cross-modal attentional deficits in processing tactile stimulation
    Roberto Dell’Acqua
    Massimo Turatto
    Pierre Jolicoeur
    Perception & Psychophysics, 2001, 63 : 777 - 789
  • [30] The Tactile Dimensions of Abstract Paintings: A Cross-Modal Study
    Albertazzi, Liliana
    Bacci, Francesca
    Canal, Luisa
    Micciolo, Rocco
    PERCEPTION, 2016, 45 (07) : 805 - 822