Federated Closed-Loop Learning for Cross-Modal Generation of Tactile Friction in Tactile Internet

Cited by: 0
Authors
Zhang, Liping [1 ]
Wang, Haoming [1 ]
Yang, Lijing [1 ]
Liu, Guohong [1 ]
Wang, Cong [1 ]
Lv, Liheng [1 ]
Affiliations
[1] Jilin Univ, Coll Commun Engn, Changchun 130025, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2025, Vol. 12, No. 06
Keywords
Friction; Accuracy; Servers; Data models; Tactile Internet; Training; Visualization; Federated learning; Computational modeling; Force measurement; Closed-loop learning (CLL); cross-modal generation; federated learning (FL); tactile friction; tactile Internet; FRAMEWORK;
DOI
10.1109/JIOT.2024.3492274
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline Classification Code
0812 ;
Abstract
Tactile Internet, as a novel industrial network, allows fully immersive multisensory remote exploration of real or virtual environments. An important technological aspect of tactile Internet is the acquisition, compression, transmission, and display of haptic information. This article focuses on the cross-modal acquisition of fingertip tactile friction from visual measurements. In tactile Internet applications, these tactile friction data are transmitted to surface haptic devices for high-fidelity haptic rendering of shapes and textures on touchscreens. To ensure reliability and low latency in such tactile friction acquisition, we develop a federated closed-loop learning (FedCLL) method based on optimized federated learning and closed-loop learning. The former builds the global model on the central server, using deep reinforcement learning to determine the aggregation weights of local tactile devices, which improves acquisition accuracy. The latter generates tactile friction for local devices, exploiting a feedback mechanism to achieve improved accuracy and reduced complexity. The proposed FedCLL is numerically evaluated using the HapTex dataset. The results show that FedCLL outperforms existing methods in both acquisition accuracy and computational complexity.
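The abstract describes a server-side aggregation step in which per-device weights (determined by deep reinforcement learning in FedCLL) combine local models into a global one. Below is a minimal sketch of such weighted federated averaging; the function name `weighted_fed_avg`, the fixed example weights, and the toy clients are illustrative assumptions, not the authors' implementation — in FedCLL the weights would come from the DRL agent rather than being supplied directly.

```python
import numpy as np

def weighted_fed_avg(local_models, weights):
    """Combine local model parameters into a global model as a
    weighted average. Each local model is a list of parameter
    arrays (one per layer); `weights` holds one scalar per client."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    n_layers = len(local_models[0])
    global_model = []
    for layer in range(n_layers):
        # stack this layer across clients: shape (n_clients, *layer_shape)
        stacked = np.stack([m[layer] for m in local_models])
        # contract the client axis against the weight vector
        global_model.append(np.tensordot(weights, stacked, axes=1))
    return global_model

# three toy "clients", each holding a single 2x2 parameter array
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
agg = weighted_fed_avg(clients, [0.2, 0.3, 0.5])
# weighted average: 0.2*1 + 0.3*2 + 0.5*3 = 2.3 in every entry
```

With uniform weights this reduces to standard FedAvg; the point of learning the weights, per the abstract, is to favor devices whose local updates improve global acquisition accuracy.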
Pages: 7026-7036
Page count: 11