Implementation of ActiveCube for multi-modal interaction

Cited: 0
Authors
Itoh, Y [1 ]
Kitamura, Y [1 ]
Kawai, M [1 ]
Kishino, F [1 ]
Affiliation
[1] Osaka Univ, Suita, Osaka 5650871, Japan
Keywords
real-time interaction; bi-directional interface; input; output; sensor; actuator; display;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose the ActiveCube system, which allows a user to construct and interact with a 3D environment by using cubes with a bi-directional user interface. A computer recognizes the 3D structure of the connected cubes in real time by utilizing the real-time communication network among the cubes. ActiveCube is also equipped with both input and output devices located where the user expects them to be, which makes the interface intuitive and helps clarify the causal relationship between the input of the user's operational intention and the output of the simulated results. Consistency is always maintained between the real object and its corresponding representation in the computer in terms of object shape and functionality.
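
The abstract describes the host computer recognizing the 3D structure of the connected cubes in real time over the inter-cube communication network. As a minimal sketch of that idea (not the authors' implementation), the Python snippet below reconstructs cube positions from per-face connectivity reports; the names FACE_OFFSETS and build_structure, and the report format, are hypothetical assumptions introduced only for illustration.

from collections import deque

# Unit offsets for the six faces of a cube (+x, -x, +y, -y, +z, -z).
FACE_OFFSETS = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def build_structure(connections, base_id=0):
    """Breadth-first walk from the base cube.

    `connections` maps cube_id -> {face: neighbour_id}, i.e. the kind of
    per-face attachment report a cube network could deliver to the host.
    Returns a dict mapping each reachable cube_id to an (x, y, z) grid cell.
    """
    positions = {base_id: (0, 0, 0)}
    queue = deque([base_id])
    while queue:
        cube = queue.popleft()
        x, y, z = positions[cube]
        for face, neighbour in connections.get(cube, {}).items():
            if neighbour is None or neighbour in positions:
                continue  # empty face, or cube already placed
            dx, dy, dz = FACE_OFFSETS[face]
            positions[neighbour] = (x + dx, y + dy, z + dz)
            queue.append(neighbour)
    return positions

# Example: three cubes in an L-shape, as the network might report them.
reports = {
    0: {"+x": 1},
    1: {"-x": 0, "+z": 2},
    2: {"-z": 1},
}
print(build_structure(reports))  # {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 0, 1)}

Re-running such a reconstruction whenever the connectivity reports change would keep the host-side model consistent with the physical shape, mirroring the consistency property the abstract emphasizes.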
Pages: 682-683
Page count: 2
Related Papers
50 records in total
  • [21] Multi-modal Interaction System for Enhanced User Experience
    Jeong, Yong Mu
    Min, Soo Young
    Lee, Seung Eun
    COMPUTER APPLICATIONS FOR WEB, HUMAN COMPUTER INTERACTION, SIGNAL AND IMAGE PROCESSING AND PATTERN RECOGNITION, 2012, 342 : 287 - +
  • [22] Multi-modal human robot interaction for map generation
    Saito, H
    Ishimura, K
    Hattori, M
    Takamori, T
    SICE 2002: PROCEEDINGS OF THE 41ST SICE ANNUAL CONFERENCE, VOLS 1-5, 2002, : 2721 - 2724
  • [23] Multi-Modal Interaction for Space Telescience of Fluid Experiments
    Yu, Ge
    Liang, Ji
    Guo, Lili
    AIVR 2018: 2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY, 2018, : 31 - 37
  • [24] Multi-modal human robot interaction for map generation
    Ghidary, SS
    Nakata, Y
    Saito, H
    Hattori, M
    Takamori, T
    IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4: EXPANDING THE SOCIETAL ROLE OF ROBOTICS IN THE NEXT MILLENNIUM, 2001, : 2246 - 2251
  • [25] A multi-modal approach to selective interaction in assistive domains
    Feil-Seifer, D
    Mataric, MJ
    2005 IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2005, : 416 - 421
  • [26] A role of multi-modal rhythms in physical interaction and cooperation
    Kenta Yonekura
    Chyon Hae Kim
    Kazuhiro Nakadai
    Hiroshi Tsujino
    Shigeki Sugano
    EURASIP Journal on Audio, Speech, and Music Processing, 2012
  • [27] Wearable Multi-modal Interface for Human Multi-robot Interaction
    Gromov, Boris
    Gambardella, Luca M.
    Di Caro, Gianni A.
    2016 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY, AND RESCUE ROBOTICS (SSRR), 2016, : 240 - 245
  • [28] Multi-level Interaction Network for Multi-Modal Rumor Detection
    Zou, Ting
    Qian, Zhong
    Li, Peifeng
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [29] Implementation of a Multi-Modal Palliative Care Curriculum for Pediatric Residents
    Romanos-Sirakis, Eleny
    Demissie, Seleshi
    Fornari, Alice
    AMERICAN JOURNAL OF HOSPICE & PALLIATIVE MEDICINE, 2021, 38 (11): 1322 - 1328
  • [30] Contextual and Cross-Modal Interaction for Multi-Modal Speech Emotion Recognition
    Yang, Dingkang
    Huang, Shuai
    Liu, Yang
    Zhang, Lihua
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2093 - 2097