Towards Enhanced Context Awareness with Vision-based Multimodal Interfaces

Cited by: 0
Authors
Hu, Yongquan [1 ]
Hu, Wen [1 ]
Quigley, Aaron [2 ]
Affiliations
[1] UNSW, Sch Comp Sci & Engn, Sydney, NSW, Australia
[2] CSIRO, Data61, Canberra, ACT, Australia
Keywords
Context Awareness; Multimodality; Vision-based Interface; Ambient Intelligence;
DOI
10.1145/3640471.3686646
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Vision-based Interfaces (VIs) are pivotal in advancing Human-Computer Interaction (HCI), particularly in enhancing context awareness. Rapid advances in multimodal Artificial Intelligence (AI) open significant further opportunities for these interfaces, promising a future of tight coupling between humans and intelligent systems. AI-driven VIs, when integrated with other modalities, offer a robust means of capturing and interpreting user intentions and complex environmental information, thereby facilitating seamless and efficient interaction. This PhD study explores three application cases of multimodal interfaces for augmenting context awareness, each focusing on one dimension of the visual modality (scale, depth, and time): fine-grained analysis of physical surfaces via microscopic images, precise projection onto the real world using depth data, and rendering of haptic feedback from video backgrounds in virtual environments.
Pages: 3
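The abstract does not detail how the "depth" application case works; as a rough, hypothetical illustration of projecting with depth data, the following minimal Python sketch back-projects a single depth pixel into 3D camera space using a standard pinhole camera model. All intrinsic parameters below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def backproject_depth(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) with metric depth to a 3D point in camera coordinates
    under a pinhole model. Intrinsics (fx, fy, cx, cy) are hypothetical here."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example: centre-ish pixel at 1.5 m with placeholder intrinsics for a 640x480 sensor
point = backproject_depth(320, 240, 1.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point)  # camera-space coordinates in metres
```

In a projection-mapping setup, points recovered this way would typically be transformed into the projector's coordinate frame and re-projected through the projector's own intrinsics; that pipeline is assumed rather than described in the record above.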