Experiments with multi-modal interfaces in a context-aware city guide

Cited by: 0
Authors
Bornträger, C [1]
Cheverst, K [1]
Davies, N [1]
Dix, A [1]
Friday, A [1]
Seitz, J [1]
Affiliations
[1] Univ Lancaster, Dept Comp, Lancaster LA1 4YR, England
Keywords: (none listed)
DOI: not available
CLC number: TP3 (computing technology, computer technology)
Discipline code: 0812
Abstract
In recent years there has been considerable research into the development of mobile context-aware applications. The canonical example of such an application is the context-aware tour guide, which offers city visitors information tailored to their preferences and environment. The nature of the user interface for these applications is critical to their success. Moreover, the user interface and the nature and modality of the information presented to the user affect many aspects of the system's overall requirements, such as screen size and network provision. Current prototypes have used a range of different interfaces developed in a largely ad hoc fashion, and there has been no systematic exploration of user preferences for information modality in mobile context-aware applications. In this paper we describe a series of experiments with multi-modal interfaces for context-aware city guides. The experiments build on our earlier research into the GUIDE system and include a series of field trials involving members of the general public. We report on the results of these experiments and extract design guidelines for the developers of future mobile context-aware applications.
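To make the abstract's central idea concrete, the following is a minimal illustrative sketch of tailoring presentation modality to user context. It is not from the paper, which publishes no code; all names here (Context, choose_modality, the rule thresholds) are hypothetical, invented only to show the general shape of such logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of context-aware modality selection for a city guide.
# The context attributes and decision rules are illustrative assumptions,
# not the GUIDE system's actual design.

@dataclass
class Context:
    location: str            # e.g. nearest attraction
    walking: bool            # is the visitor currently moving?
    ambient_noise_db: float  # rough ambient noise level
    prefers_audio: bool      # stated user preference

def choose_modality(ctx: Context) -> str:
    """Pick a presentation modality from simple contextual rules."""
    if ctx.walking and ctx.prefers_audio and ctx.ambient_noise_db < 70:
        return "audio"       # hands- and eyes-free while walking
    if ctx.walking:
        return "short-text"  # glanceable captions only
    return "rich-text"       # full page with images when stationary

if __name__ == "__main__":
    ctx = Context(location="Lancaster Castle", walking=True,
                  ambient_noise_db=55.0, prefers_audio=True)
    print(choose_modality(ctx))  # -> "audio"
```

Even in this toy form, the sketch shows why modality choice feeds back into system requirements mentioned in the abstract: an audio-first guide needs little screen area but steady network bandwidth, while a rich-text guide needs the opposite.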
Pages: 116-130 (15 pages)
Related papers (10 of 50 shown)
  • [1] Things that see: Context-aware multi-modal interaction
    Crowley, James L.
    [J]. COGNITIVE VISION SYSTEMS: SAMPLING THE SPECTRUM OF APPROACHES, 2006, 3948 : 183 - 198
  • [2] CONTEXT-AWARE DEEP LEARNING FOR MULTI-MODAL DEPRESSION DETECTION
    Lam, Genevieve
    Huang, Dongyan
    Lin, Weisi
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3946 - 3950
  • [3] Multi-Modal Context-Aware reasoNer (CAN) at the Edge of IoT
    Rahman, Hasibur
    Rahmani, Rahim
    Kanter, Theo
    [J]. 8TH INTERNATIONAL CONFERENCE ON AMBIENT SYSTEMS, NETWORKS AND TECHNOLOGIES (ANT-2017) AND THE 7TH INTERNATIONAL CONFERENCE ON SUSTAINABLE ENERGY INFORMATION TECHNOLOGY (SEIT 2017), 2017, 109 : 335 - 342
  • [4] SCATEAgent: Context-aware software agents for multi-modal travel
    Yin, M
    Griss, M
    [J]. APPLICATIONS OF AGENT TECHNOLOGY IN TRAFFIC AND TRANSPORTATION, 2005, : 69 - 84
  • [5] Adaptive Context-Aware Multi-Modal Network for Depth Completion
    Zhao, Shanshan
    Gong, Mingming
    Fu, Huan
    Tao, Dacheng
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 5264 - 5276
  • [6] Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis
    Chauhan, Dushyant Singh
    Akhtar, Md Shad
    Ekbal, Asif
    Bhattacharyya, Pushpak
    [J]. 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 5647 - 5657
  • [7] Hydra: A Personalized and Context-Aware Multi-Modal Transportation Recommendation System
    Liu, Hao
    Tong, Yongxin
    Zhang, Panpan
    Lu, Xinjiang
    Duan, Jianguo
    Xiong, Hui
    [J]. KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 2314 - 2324
  • [8] Context-aware multi-modal route selection service for urban computing scenarios
    Brito, Matheus
    Santos, Camilo
    Martins, Bruno S.
    Medeiros, Iago
    Seruffo, Marcos
    Cerqueira, Eduardo
    Rosario, Denis
    [J]. AD HOC NETWORKS, 2024, 161
  • [9] Social Context-aware Person Search in Videos via Multi-modal Cues
    Li, Dan
    Xu, Tong
    Zhou, Peilun
    He, Weidong
    Hao, Yanbin
    Zheng, Yi
    Chen, Enhong
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2022, 40 (03)
  • [10] Context-aware selection of multi-modal conversational fillers in human-robot dialogues
    Galle, Matthias
    Kynev, Ekaterina
    Monet, Nicolas
    Legras, Christophe
    [J]. 2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2017, : 317 - 322