Optimum object selection methods for spontaneous gaze-based interaction with linear and circular trajectories

Cited by: 2
Authors
Nurlatifa, Hafzatin [1 ]
Hartanto, Rudy [1 ]
Ataka, Ahmad [1 ]
Wibirama, Sunu [1 ]
Affiliations
[1] Univ Gadjah Mada, Fac Engn, Dept Elect & Informat Engn, Yogyakarta 55281, Indonesia
Keywords
Touchless technology; Interactive application; Gaze tracking; Events detection
DOI
10.1016/j.rineng.2024.101769
Chinese Library Classification
T [Industrial Technology]
Subject classification
08
Abstract
The demand for touchless technologies has increased since the outbreak of COVID-19. Spontaneous gaze-based interaction is a promising technique because it offers a natural approach to object selection using eye movements. There are two types of object trajectory in spontaneous gaze-based interaction: linear and circular. However, previous studies did not determine the optimum object selection method by taking the characteristics of each trajectory into account. In addition, previous studies suffered from high rates of missed detection, which made spontaneous gaze-based applications less accurate and responsive. To address these gaps, we conducted a novel study to propose optimum methods for object selection in linear and circular trajectories. The experimental results suggest that each trajectory requires a different combination of event detection and object selection techniques to achieve optimum performance. We recommend 2D Correlation as the object selection technique. We also recommend Identification by Velocity and Movement Pattern (I-VMP) and the Hidden Markov Model (HMM) as the event detection techniques for linear and circular trajectories, respectively. The proposed approaches solved the missed detection problem and significantly increased the accuracy of gaze-based object selection to 95.60% and 99.73% on linear and circular trajectories, respectively. In the future, our findings are promising for the development of accurate, responsive, and seamless gaze-based touchless applications.
Pages: 11
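The abstract recommends 2D Correlation for object selection: during smooth pursuit, the gaze trail of a user following a moving target correlates strongly with that target's on-screen trajectory. The sketch below illustrates the general idea only — correlating gaze against each candidate trajectory over a time window and selecting the best match. The function name, the per-axis combination rule, and the 0.8 threshold are all assumptions for illustration, not the paper's actual implementation or parameters.

```python
import numpy as np

def select_object(gaze_xy, object_trajs, threshold=0.8):
    """Pick the object whose trajectory best correlates with the gaze.

    gaze_xy: (N, 2) array of gaze samples over a time window.
    object_trajs: dict mapping object id -> (N, 2) array of that
        object's on-screen positions over the same window.
    threshold: minimum correlation required to accept a selection.
    Returns the selected object id, or None if no object passes.
    """
    best_id, best_score = None, -1.0
    for obj_id, traj in object_trajs.items():
        # Correlate the x and y components separately ("2D" correlation),
        # then require both axes to follow the object.
        rx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        score = min(rx, ry)
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id if best_score >= threshold else None

# Toy usage: one circular and one linear target, noisy gaze following the circle.
t = np.linspace(0.0, 1.0, 60)
circle = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
line = np.stack([t, t], axis=1)
gaze = circle + np.random.default_rng(0).normal(0.0, 0.05, circle.shape)
picked = select_object(gaze, {"circle": circle, "line": line})
```

In a full pipeline, an event detection stage (e.g. I-VMP or an HMM, as the paper recommends per trajectory type) would first isolate the smooth-pursuit samples before this matching step is applied.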