Inferring Human Knowledgeability from Eye Gaze in Mobile Learning Environments

Cited: 4
Authors
Celiktutan, Oya [1]
Demiris, Yiannis [1]
Affiliations
[1] Imperial Coll London, Dept Elect & Elect Engn, Personal Robot Lab, London, England
Funding
EU Horizon 2020;
Keywords
Assistive mobile applications; Noninvasive gaze tracking; Analysis of eye movements; Human knowledgeability prediction
DOI
10.1007/978-3-030-11024-6_13
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
What people look at during a visual task reflects an interplay between ocular motor functions and cognitive processes. In this paper, we study the links between eye gaze and cognitive states to investigate whether eye gaze reveals information about an individual's knowledgeability. We focus on a mobile learning scenario in which a user and a virtual agent play a quiz game on a hand-held mobile device. To the best of our knowledge, this is the first attempt to predict a user's knowledgeability from eye gaze using a noninvasive eye tracking method on mobile devices: we perform gaze estimation with the front-facing camera of the mobile device rather than with a specialised eye tracker. First, we define a set of eye movement features that are discriminative for inferring a user's knowledgeability. Next, we train a model to predict the user's knowledgeability in the course of responding to a question. Using eye movement features only, we obtain a classification performance of 59.1%, which is comparable to human performance. This has implications for (1) adapting the virtual agent's behaviour to the user's needs (e.g., the virtual agent can give hints) and (2) personalising quiz questions to the user's perceived knowledgeability.
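The abstract describes a three-stage pipeline: camera-based gaze estimation, extraction of eye movement features, and per-question classification of the user's knowledgeability. As a rough illustration only, the Python sketch below shows what the last two stages could look like; the feature definitions, thresholds, and RBF-SVM classifier are assumptions made for illustration, not the authors' published method, and both function names are hypothetical.

    # Hypothetical sketch (not the authors' code): summarise a front-camera
    # gaze trace into eye movement features, then score a binary
    # "knowledgeable / not knowledgeable" classifier with cross-validation.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def eye_movement_features(gaze_xy, fs=30.0, saccade_thresh=0.05):
        """gaze_xy: (N x 2) normalised screen coordinates for one question.
        Frame rate fs and the saccade threshold are assumed values."""
        steps = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)  # per-frame gaze shift
        saccades = steps > saccade_thresh  # crude velocity-threshold saccade detector
        return np.array([
            (~saccades).sum() / fs,                              # approx. fixation time (s)
            saccades.sum(),                                      # saccade count
            steps[saccades].mean() if saccades.any() else 0.0,   # mean saccade amplitude
            gaze_xy[:, 0].std(),                                 # horizontal dispersion
            gaze_xy[:, 1].std(),                                 # vertical dispersion
        ])

    def knowledgeability_accuracy(traces, labels):
        """traces: one (N_i x 2) gaze array per question; labels: 1 if the
        user answered correctly, else 0. Returns mean 5-fold CV accuracy."""
        X = np.vstack([eye_movement_features(t) for t in traces])
        clf = SVC(kernel="rbf", class_weight="balanced")
        return cross_val_score(clf, X, np.asarray(labels), cv=5).mean()

The velocity-threshold saccade detector and the five summary statistics are deliberate simplifications; the sketch is meant only to make the shape of the pipeline concrete.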
Pages: 193-209
Page count: 17
Related Papers
50 in total
  • [31] Gego, Daniel; Carreto, Carlos; Figueiredo, Luis. Teleoperation of a mobile robot based on eye-gaze tracking. 2017 12TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI), 2017.
  • [32] Zhou, Jinchao; Li, Guoan; Shi, Feng; Guo, Xiaoyan; Wan, Pengfei; Wang, Miao. EM-Gaze: eye context correlation and metric learning for gaze estimation. VISUAL COMPUTING FOR INDUSTRY, BIOMEDICINE, AND ART, 6.
  • [33] Spindler, F; Chaumette, F. Gaze control using human eye movements. 1997 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION - PROCEEDINGS, VOLS 1-4, 1997: 2258-2263.
  • [34] Nakano, Yukiko I.; Jokinen, Kristiina; Huang, Hung-Hsuan. 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye Gaze and Multimodality. ICMI '12: PROCEEDINGS OF THE ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2012: 611-612.
  • [35] Quintero, Johana; Amaya, Edixon. Mobile collaborative learning environments. REVISTA CICAG, 2016, 14(01): 66-80.
  • [36] Bhatti, A; Majewski, M. Infrastructure for mobile learning environments. IKE '04: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE ENGINEERING, 2004: 422-428.
  • [37] Calder, A; Lawrence, A; Keane, J; Scott, S; Owen, A; Christoffels, I; Young, A. Reading the mind from eye gaze. JOURNAL OF COGNITIVE NEUROSCIENCE, 2002: 74-74.
  • [38] Calder, AJ; Lawrence, AD; Keane, J; Scott, SK; Owen, AM; Christoffels, I; Young, AW. Reading the mind from eye gaze. NEUROPSYCHOLOGIA, 2002, 40(08): 1129-1138.
  • [39] Kalay, Ozem; Sezgin, T. Metin. Distinguishing People From Eye Gaze. 2013 21ST SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2013.
  • [40] Seaman, Jayson; Rheingold, Alison. Circle Talks As Situated Experiential Learning: Context, Identity, and Knowledgeability in "Learning From Reflection". JOURNAL OF EXPERIENTIAL EDUCATION, 2013, 36(02): 155-174.