FIGI: floating interface for gesture-based interaction

Cited: 0
Authors
M. De Marsico
S. Levialdi
M. Nappi
S. Ricciardi
Affiliations
[1] Sapienza University of Rome,Computer Science Department
[2] University of Salerno
[3] VRLab
Keywords
Gesture-based interface; 3D object manipulation; Contact-less interaction; Mixed reality; Medical imaging;
DOI: not available
Abstract
Mixed reality is a promising technology for a wide range of application fields, including computer-based training, systems maintenance and medical imaging, to name just a few. The floating interface for gesture-based interaction (FIGI) architecture presented in this study combines a context-adaptive head-up interface, projected in the central region of the user’s visual field, with gesture-based interaction, enabling easy, robust and powerful manipulation of virtual contents that are visualized after being mapped onto the real environment surrounding the user. The interaction paradigm combines one-hand, two-hand and time-based gestures both to select tools/functions among those available and to operate them. Even conventional keyboard-based functions such as typing can be performed without a physical interface, by means of a floating keyboard layout. The paper describes the overall system architecture and its application to the interactive visualization of three-dimensional models of human anatomy, for either training or educational purposes. We also report the results of an evaluation study assessing the usability, effectiveness and possible limitations of the proposed approach.
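The interaction paradigm described above, in which one-hand, two-hand and time-based gestures select and operate tools, can be sketched as a simple dispatch loop. This is only an illustrative sketch, not FIGI's actual implementation: the `Gesture` fields, the dwell threshold, and the binding names are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hypothetical gesture event: how many hands are detected, which pose
# they form, and how long the pose has been held. These names are
# illustrative, not part of the FIGI system described in the paper.
@dataclass(frozen=True)
class Gesture:
    hands: int           # 1 or 2
    pose: str            # e.g. "point", "pinch", "spread"
    hold_seconds: float  # dwell time, used for time-based gestures

DWELL_THRESHOLD = 1.0  # assumed dwell time required to confirm a gesture

def dispatch(gesture: Gesture,
             bindings: Dict[Tuple[int, str], Callable[[], str]]) -> str:
    """Map a recognized gesture to a tool/function of the floating interface."""
    action = bindings.get((gesture.hands, gesture.pose))
    if action is None:
        return "ignored"
    # Time-based gating: a recognized pose only triggers once held long enough.
    if gesture.hold_seconds < DWELL_THRESHOLD:
        return "pending"
    return action()

# Example bindings combining one-hand and two-hand gestures.
bindings = {
    (1, "point"):  lambda: "select tool",
    (1, "pinch"):  lambda: "grab model",
    (2, "spread"): lambda: "scale model",
}

print(dispatch(Gesture(1, "pinch", 1.2), bindings))   # grab model
print(dispatch(Gesture(2, "spread", 0.3), bindings))  # pending
```

Keying bindings on (hand count, pose) keeps one-hand and two-hand vocabularies from colliding, while the dwell check stands in for the paper's time-based gestures.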
Pages: 511 - 524
Number of pages: 13
Related papers (50 results)
  • [1] FIGI: floating interface for gesture-based interaction
    De Marsico, M.
    Levialdi, S.
    Nappi, M.
    Ricciardi, S.
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2014, 5 (04) : 511 - 524
  • [2] Developing a gesture-based interface
    Gupta, N
    Mittal, P
    Roy, SD
    Chaudhury, S
    Banerjee, S
    IETE JOURNAL OF RESEARCH, 2002, 48 (3-4) : 237 - 244
  • [3] A Gesture-based Multimodal Interface for Human-Robot Interaction
    Uimonen, Mikael
    Kemppi, Paul
    Hakanen, Taru
    2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, : 165 - 170
  • [4] The Potential of Gesture-Based Interaction
    Rise, Kasper
    Alsos, Ole Andreas
    HUMAN-COMPUTER INTERACTION. MULTIMODAL AND NATURAL INTERACTION, HCI 2020, PT II, 2020, 12182 : 125 - 136
  • [5] Gesture-Based Interaction: Visual Gesture Mapping
    Rise, Kasper
    Alsos, Ole Andreas
    HUMAN-COMPUTER INTERACTION. MULTIMODAL AND NATURAL INTERACTION, HCI 2020, PT II, 2020, 12182 : 106 - 124
  • [6] A Gesture-Based Interface and Active Cinema
    Chavez, Mark J.
    Kyaw, Aung Sithu
    AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION, PT II, 2011, 6975 : 309 - 310
  • [7] Gesture-Based Interaction in Medical Interfaces
    Virag, Ioan
    Stoicu-Tivadar, Lacramioara
    Crisan-Vida, Mihaela
    2016 IEEE 11TH INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2016, : 519 - 523
  • [8] Hotspot components for gesture-based interaction
    Jaimes, A
    Liu, JY
    HUMAN-COMPUTER INTERACTION - INTERACT 2005, PROCEEDINGS, 2005, 3585 : 1062 - 1066
  • [9] Medianeum: Gesture-Based Ergonomic Interaction
    Zajega, Francois
    Picard-Limpens, Cecile
    Rene, Julie
    Puleo, Antonin
    Decuypere, Justine
    Frisson, Christian
    Ravet, Thierry
    Mancas, Matei
    INTELLIGENT TECHNOLOGIES FOR INTERACTIVE ENTERTAINMENT, 2013, 124 : 96 - 103
  • [10] Gesture-based interaction with a pet robot
    Oracle Corp, Redwood Shores, United States
    PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE, : 628 - 633