Implementation of ActiveCube for multi-modal interaction

Cited by: 0
Authors
Itoh, Y [1]
Kitamura, Y [1]
Kawai, M [1]
Kishino, F [1]
Affiliations
[1] Osaka Univ, Suita, Osaka 5650871, Japan
Keywords
real-time interaction; bi-directional interface; input; output; sensor; actuator; display
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose the ActiveCube system, which allows a user to construct and interact with a 3D environment by using cubes with a bi-directional user interface. A computer recognizes the 3D structure of the connected cubes in real time by utilizing the real-time communication network among the cubes. ActiveCube is also equipped with input and output devices located exactly where the user expects them to be, which makes the interface intuitive and helps clarify the causal relationship between the input of the user's operational intention and the output of the simulated results. Consistency is always maintained between the real object and its corresponding representation in the computer in terms of object shape and functionality.
Pages: 682-683
Number of pages: 2
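The abstract describes two mechanisms worth making concrete: the host computer recognizes the 3D structure of the connected cubes in real time from the inter-cube communication network, and it keeps its internal model consistent with the physical structure as cubes are attached and detached. Below is a minimal Python sketch of that idea, not the authors' implementation: it assumes each cube reports face connection and disconnection events as (cube_id, face, neighbor_id, neighbor_face), and all class and function names (CubeGraph, on_connect, layout, etc.) are illustrative.

```python
# Minimal sketch (not the ActiveCube firmware) of host-side structure
# recognition: cubes report face connect/disconnect events, and the host
# rebuilds a grid layout that mirrors the physical assembly.

from collections import deque

# Unit offsets for the six faces of a cube, indexed 0..5.
FACE_OFFSETS = {
    0: (1, 0, 0), 1: (-1, 0, 0),   # +x / -x
    2: (0, 1, 0), 3: (0, -1, 0),   # +y / -y
    4: (0, 0, 1), 5: (0, 0, -1),   # +z / -z
}

class CubeGraph:
    """Host-side model kept consistent with the physical cube structure."""

    def __init__(self, base_id):
        self.base_id = base_id
        # adjacency: cube_id -> {face_index: neighbor_id}
        self.links = {base_id: {}}

    def on_connect(self, cube_id, face, neighbor_id, neighbor_face):
        """Handle a 'face connected' event reported over the cube network."""
        self.links.setdefault(cube_id, {})[face] = neighbor_id
        self.links.setdefault(neighbor_id, {})[neighbor_face] = cube_id

    def on_disconnect(self, cube_id, face):
        """Handle a 'face disconnected' event; drop both directions."""
        neighbor_id = self.links.get(cube_id, {}).pop(face, None)
        if neighbor_id is not None:
            nbr_faces = self.links.get(neighbor_id, {})
            for f, other in list(nbr_faces.items()):
                if other == cube_id:
                    del nbr_faces[f]

    def layout(self):
        """Recover integer grid positions by BFS from the base cube.

        Per-cube orientation is ignored for brevity; a real system also has
        to resolve how each cube is rotated relative to its neighbor.
        """
        positions = {self.base_id: (0, 0, 0)}
        queue = deque([self.base_id])
        while queue:
            cid = queue.popleft()
            x, y, z = positions[cid]
            for face, nbr in self.links.get(cid, {}).items():
                if nbr in positions:
                    continue
                dx, dy, dz = FACE_OFFSETS[face]
                positions[nbr] = (x + dx, y + dy, z + dz)
                queue.append(nbr)
        return positions


if __name__ == "__main__":
    g = CubeGraph(base_id=0)
    g.on_connect(0, 0, 1, 1)   # cube 1 attached to the +x face of the base cube
    g.on_connect(1, 4, 2, 5)   # cube 2 attached to the +z face of cube 1
    print(g.layout())          # {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 0, 1)}
```

A full system would additionally route sensor input and actuator output over the same network and resolve each cube's rotation; this sketch only shows how shape consistency between the physical object and its internal representation can be maintained.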