Convolutional neural networks can decode eye movement data: A black box approach to predicting task from eye movements

Cited by: 2
Authors
Cole, Zachary J. [1]
Kuntzelman, Karl M. [1]
Dodd, Michael D. [1]
Johnson, Matthew R. [1]
Affiliation
[1] University of Nebraska-Lincoln, Lincoln, NE 68588, USA
Source
JOURNAL OF VISION | 2021, Vol. 21, No. 7
Funding
U.S. National Institutes of Health;
Keywords
OBSERVERS TASK; YARBUS;
DOI
10.1167/jov.21.7.9
CLC number (Chinese Library Classification)
R77 [Ophthalmology];
Discipline code
100212;
Abstract
Previous attempts to classify task from eye movement data have relied on model architectures designed to emulate theoretically defined cognitive processes and/or data that have been processed into aggregate (e.g., fixations, saccades) or statistical (e.g., fixation density) features. Black box convolutional neural networks (CNNs) are capable of identifying relevant features in raw and minimally processed data and images, but difficulty interpreting these model architectures has contributed to challenges in generalizing lab-trained CNNs to applied contexts. In the current study, a CNN classifier was used to classify task from two eye movement datasets (Exploratory and Confirmatory) in which participants searched, memorized, or rated indoor and outdoor scene images. The Exploratory dataset was used to tune the hyperparameters of the model, and the resulting model architecture was retrained, validated, and tested on the Confirmatory dataset. The data were formatted into timelines (i.e., x-coordinate, y-coordinate, pupil size) and minimally processed images. To further understand the informational value of each component of the eye movement data, the timeline and image datasets were broken down into subsets with one or more components systematically removed. Classification of the timeline data consistently outperformed the image data. The Memorize condition was most often confused with Search and Rate. Pupil size was the least uniquely informative component when compared with the x- and y-coordinates. The general pattern of results for the Exploratory dataset was replicated in the Confirmatory dataset. Overall, the present study provides a practical and reliable black box solution to classifying task from eye movement data.
Pages: 15
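
For readers who want a concrete starting point, below is a minimal, hypothetical PyTorch sketch of the kind of pipeline the abstract describes: a small 1D CNN that maps a fixed-length gaze timeline with channels (x-coordinate, y-coordinate, pupil size) onto the three task labels (search, memorize, rate). This is not the authors' published architecture or code; the layer sizes, the 2,000-sample timeline length, and all identifiers are assumptions made for illustration only.

# Hypothetical sketch (not the published model): a 1D CNN over gaze timelines.
import torch
import torch.nn as nn

class GazeTimelineCNN(nn.Module):
    def __init__(self, in_channels: int = 3, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension to one value per channel
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, timesteps), e.g. (B, 3, 2000) for a
        # 2,000-sample recording of x-coordinate, y-coordinate, and pupil size.
        return self.classifier(self.features(x).squeeze(-1))

# Ablating a component (e.g., pupil size) amounts to dropping or zeroing that
# channel before training, mirroring the subset analyses described in the abstract.
model = GazeTimelineCNN()
dummy = torch.randn(8, 3, 2000)   # 8 simulated trials of 2,000 samples each
logits = model(dummy)             # (8, 3) scores over search / memorize / rate
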