Towards Eyeglasses Refraction in Appearance-based Gaze Estimation

Citations: 0
Authors
Lyu, Junfeng [1 ]
Xu, Feng
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing, Peoples R China
Funding
National Key R&D Program of China; Beijing Natural Science Foundation;
Keywords
Computing methodologies; Gaze estimation; Eyeglasses refraction; Multi-task learning;
DOI
10.1109/ISMAR59233.2023.00084
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
For subjects with myopia or hyperopia, eyeglasses change the apparent position of objects in their view, leading to different eyeball rotations for the same gaze target (Fig. 1). Existing appearance-based gaze estimation methods ignore this effect; this paper investigates it and proposes an effective method to account for it in gaze estimation, achieving noticeable improvements. Specifically, we discover that the appearance-gaze mapping differs between spectacled and unspectacled conditions, and that the deviations are nearly consistent with the physical laws of an ideal lens. Based on this discovery, we propose a novel multi-task training strategy that encourages networks to regress gaze and classify the wearing condition simultaneously. We apply the proposed strategy to several popular methods, both supervised and unsupervised, and evaluate them on different datasets with various backbones. The results show that the multi-task training strategy can be applied to existing methods to improve gaze estimation performance. To the best of our knowledge, we are the first to clearly reveal and explicitly consider eyeglasses refraction in appearance-based gaze estimation. Data and code are available at https://github.com/StoryMY/RefractionGaze.
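The multi-task strategy described in the abstract, jointly regressing gaze and classifying the eyeglasses-wearing condition, amounts to optimizing a weighted sum of a regression loss and a classification loss. The following is a minimal illustrative sketch only: the L1 gaze term, the binary cross-entropy term, and the task weight `lam` are assumptions for demonstration, not the authors' actual implementation (see the linked repository for that).

```python
import numpy as np

def multitask_loss(gaze_pred, gaze_true, glasses_logit, glasses_label, lam=0.1):
    """Joint loss for a two-head network: gaze regression plus an
    auxiliary eyeglasses-wearing classification task.
    `lam` balances the two tasks and is an illustrative choice."""
    # Gaze regression term: mean absolute error over (pitch, yaw) angles.
    l_gaze = np.mean(np.abs(np.asarray(gaze_pred) - np.asarray(gaze_true)))
    # Classification term: binary cross-entropy on the sigmoid of the logit.
    p = 1.0 / (1.0 + np.exp(-glasses_logit))
    eps = 1e-12  # guards against log(0)
    l_cls = -(glasses_label * np.log(p + eps)
              + (1 - glasses_label) * np.log(1.0 - p + eps))
    return l_gaze + lam * l_cls

# Toy example: predicted vs. ground-truth (pitch, yaw) in radians, plus a
# confident "wearing glasses" logit whose label matches.
loss = multitask_loss([0.10, -0.05], [0.12, -0.04], glasses_logit=3.0, glasses_label=1)
```

In practice both heads would share a backbone, so gradients from the classification head push the shared features to encode the wearing condition, which is the mechanism the paper exploits.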
Pages: 693-702
Page count: 10