Experiments with multi-modal interfaces in a context-aware city guide

Times cited: 0
Authors
Bornträger, C [1]
Cheverst, K [1]
Davies, N [1]
Dix, A [1]
Friday, A [1]
Seitz, J [1]
Affiliation
[1] Univ Lancaster, Dept Comp, Lancaster LA1 4YR, England
Keywords: none listed
DOI: not available
Chinese Library Classification (CLC): TP3 [computing technology; computer technology]
Discipline classification code: 0812
Abstract
In recent years there has been considerable research into the development of mobile context-aware applications. The canonical example of such an application is the context-aware tour guide that offers city visitors information tailored to their preferences and environment. The nature of the user interface for these applications is critical to their success. Moreover, the user interface and the nature and modality of information presented to the user affect many aspects of the system's overall requirements, such as screen size and network provision. Current prototypes have used a range of different interfaces developed in a largely ad hoc fashion, and there has been no systematic exploration of user preferences for information modality in mobile context-aware applications. In this paper we describe a series of experiments with multi-modal interfaces for context-aware city guides. The experiments build on our earlier research into the GUIDE system and include a series of field trials involving members of the general public. We report on the results of these experiments and extract design guidelines for the developers of future mobile context-aware applications.
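The abstract describes the behaviour of a context-aware, multi-modal guide only at a high level. Purely as an illustrative sketch (not taken from the paper or from the GUIDE system), the snippet below shows one way such a guide could choose an attraction and a presentation modality from the visitor's context; every name in it (VisitorContext, ATTRACTIONS, select_presentation) and the selection rules themselves are hypothetical assumptions, not the authors' design.

    from dataclasses import dataclass
    from typing import Optional, Set, Tuple

    @dataclass
    class VisitorContext:
        location: str            # nearest attraction reported by positioning
        interests: Set[str]      # preferences gathered when the visitor registers
        walking: bool = False    # a moving visitor may prefer audio over on-screen text
        network_ok: bool = True  # richer media only when the network allows it

    # Hypothetical content base keyed by attraction name (illustrative data only).
    ATTRACTIONS = {
        "Lancaster Castle": {
            "tags": {"history", "architecture"},
            "text": "A medieval castle overlooking the city ...",
            "audio": "castle_tour.ogg",
            "image": "castle.jpg",
        },
    }

    def select_presentation(ctx: VisitorContext) -> Optional[Tuple[str, str]]:
        """Return (attraction, modality) for the current context, or None."""
        info = ATTRACTIONS.get(ctx.location)
        if info is None or not (info["tags"] & ctx.interests):
            return None                    # nothing nearby matches the visitor's preferences
        if ctx.walking:
            return ctx.location, "audio"   # eyes-free presentation while moving
        if ctx.network_ok:
            return ctx.location, "image"   # richer media when bandwidth permits
        return ctx.location, "text"        # low-bandwidth fallback

    print(select_presentation(
        VisitorContext("Lancaster Castle", {"history"}, walking=True)))
    # -> ('Lancaster Castle', 'audio')

The questions the paper actually studies through field trials (which modality users prefer, and in which circumstances) are exactly what would inform the hard-coded rules in a function like select_presentation.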
Pages: 116-130
Page count: 15