CAM-Vtrans: real-time sports training utilizing multi-modal robot data

Cited by: 0
Authors
Hong, LinLin [1 ]
Lee, Sangheang [1 ]
Song, GuanTing [2 ]
Affiliations
[1] Jeonju Univ, Coll Phys Educ, Jeonju, Jeonrabug Do, South Korea
[2] Gongqing Inst Sci & Technol, Jiujiang, Jiangxi, Peoples R China
Source
Keywords
assistive robotics; human-machine interaction; balance control; movement recovery; vision-transformer; CLIP; cross-attention; REPRESENTATION; CLASSIFICATION;
DOI
10.3389/fnbot.2024.1453571
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Introduction: Assistive robots and human-robot interaction have become integral to sports training. However, existing methods often fail to provide real-time, accurate feedback, and they rarely integrate comprehensive multi-modal data. Methods: To address these issues, we propose CAM-Vtrans, a Cross-Attention Multi-modal Visual Transformer. Building on Vision Transformers (ViT) and vision-language models such as CLIP, and employing cross-attention mechanisms, CAM-Vtrans combines visual and textual information to give athletes accurate and timely feedback. By exploiting multi-modal robot data, CAM-Vtrans helps athletes optimize performance while minimizing injury risk, overcoming the limitations of existing methods and improving the precision and efficiency of sports training programs.
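The cross-attention fusion the abstract describes can be illustrated with a minimal sketch: visual tokens (e.g. ViT patch embeddings) act as queries attending over text tokens (e.g. CLIP text embeddings). This is a generic, hedged illustration of the mechanism, not the paper's implementation; the function name `cross_attention`, the dimensions, and the randomly initialized projection matrices (standing in for learned weights) are all assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(visual_tokens, text_tokens, d_k=64, seed=0):
    """Single-head cross-attention: visual tokens (queries) attend to
    text tokens (keys/values), yielding one fused vector per visual token."""
    rng = np.random.default_rng(seed)
    d_v, d_t = visual_tokens.shape[1], text_tokens.shape[1]
    # Random projections stand in for learned weight matrices.
    W_q = rng.standard_normal((d_v, d_k)) / np.sqrt(d_v)
    W_k = rng.standard_normal((d_t, d_k)) / np.sqrt(d_t)
    W_v = rng.standard_normal((d_t, d_k)) / np.sqrt(d_t)
    Q = visual_tokens @ W_q            # (n_vis, d_k)
    K = text_tokens @ W_k              # (n_txt, d_k)
    V = text_tokens @ W_v              # (n_txt, d_k)
    scores = Q @ K.T / np.sqrt(d_k)    # (n_vis, n_txt) similarity
    attn = softmax(scores, axis=-1)    # each row sums to 1
    return attn @ V                    # (n_vis, d_k) fused representation

# Example: 16 ViT-style patch tokens (768-d) attending to 8 text tokens (512-d).
fused = cross_attention(np.random.randn(16, 768), np.random.randn(8, 512))
print(fused.shape)  # (16, 64)
```

Each visual token's output is a text-conditioned mixture of the value vectors, which is what lets the model ground visual observations of an athlete in textual feedback cues.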
Pages: 15
Related Papers
50 records
  • [41] Active Linguistic Authentication Using Real-Time Stylometric Evaluation for Multi-Modal Decision Fusion
    Stolerman, Ariel
    Fridman, Alex
    Greenstadt, Rachel
    Brennan, Patrick
    Juola, Patrick
    ADVANCES IN DIGITAL FORENSICS X, 2014, 433 : 165 - 183
  • [42] GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection
    Gou, Yingdong
    Wang, Kexin
    Wei, Siwen
    Shi, Changxin
    INTERNATIONAL JOURNAL OF UNCERTAINTY FUZZINESS AND KNOWLEDGE-BASED SYSTEMS, 2023, 31 (06) : 957 - 973
  • [43] Real-time multi-modal rigid registration based on a novel symmetric-SIFT descriptor
    Chen, Jian
    Tian, Jie
    Progress in Natural Science, 2009, 19 (05) : 643 - 651
  • [44] Real-time dense small object detection algorithm based on multi-modal tea shoots
    Shuai, Luyu
    Chen, Ziao
    Li, Zhiyong
    Li, Hongdan
    Zhang, Boda
    Wang, Yuchao
    Mu, Jiong
    FRONTIERS IN PLANT SCIENCE, 2023, 14
  • [45] RTSI: An Index Structure for Multi-Modal Real-Time Search on Live Audio Streaming Services
    Wen, Zeyi
    Liu, Xingyang
    Cao, Hongjian
    He, Bingsheng
    2018 IEEE 34TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE), 2018, : 1495 - 1506
  • [46] Effects of a Public Real-Time Multi-Modal Transportation Information Display on Travel Behavior and Attitudes
    Ge, Yanbo
    Jabbari, Parastoo
    MacKenzie, Don
    Tao, Jiarui
    JOURNAL OF PUBLIC TRANSPORTATION, 2017, 20 (02) : 40 - 65
  • [47] Real-time multi-modal rigid registration based on a novel symmetric-SIFT descriptor
    Chen, Jian
    Tian, Jie
    PROGRESS IN NATURAL SCIENCE-MATERIALS INTERNATIONAL, 2009, 19 (05) : 643 - 651
  • [48] Multi-modal biochip for simultaneous, real-time measurement of adhesion and electrical activity of neurons in culture
    Khraiche, Massoud
    Muthuswamy, Jit
    LAB ON A CHIP, 2012, 12 (16) : 2930 - 2941
  • [49] Facilitating Multi-modal Locomotion in a Quadruped Robot utilizing Passive Oscillation of the Spine Structure
    Takuma, Takashi
    Ikeda, Masahiro
    Masuda, Tatsuya
    IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 2010, : 4940 - 4945
  • [50] Multi-modal Data Fusion for People Perception in the Social Robot Haru
    Ragel, Ricardo
    Rey, Rafael
    Paez, Alvaro
    Ponce, Javier
    Nakamura, Keisuke
    Caballero, Fernando
    Merino, Luis
    Gomez, Randy
    SOCIAL ROBOTICS, ICSR 2022, PT I, 2022, 13817 : 174 - 187