Appearance-Based Gaze Estimation Using Dilated-Convolutions

Cited by: 53
Authors
Chen, Zhaokang [1 ]
Shi, Bertram E. [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Kowloon, Hong Kong, Peoples R China
Keywords
Appearance-based gaze estimation; Dilated-convolutions
DOI
10.1007/978-3-030-20876-9_20
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Appearance-based gaze estimation has attracted increasing attention because of its wide range of applications, and the use of deep convolutional neural networks has significantly improved its accuracy. To improve estimation accuracy further, we focus on extracting better features from eye images. Relatively large changes in gaze angle may produce only small changes in eye appearance. We argue that current gaze-estimation architectures may fail to capture such small changes, because their multiple pooling or other downsampling layers greatly reduce the spatial resolution of the high-level feature maps. To evaluate whether features extracted at high spatial resolution benefit gaze estimation, we adopt dilated-convolutions, which extract high-level features without reducing spatial resolution. In cross-subject experiments on the Columbia Gaze dataset (eye contact detection) and the MPIIGaze dataset (3D gaze vector regression), the resulting Dilated-Nets achieve significant gains (up to 20.8%) over comparable networks without dilated-convolutions, and the proposed Dilated-Net achieves state-of-the-art results on both datasets.
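The abstract's key architectural point — that stacked pooling layers shrink high-level feature maps, while dilated (atrous) convolutions enlarge the receptive field at full resolution — can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' actual Dilated-Net: the layer widths, kernel sizes, dilation schedule, and the 64x96 eye-patch size are all assumptions chosen for clarity.

```python
# Minimal sketch (assumed layer sizes, not the paper's architecture)
# contrasting a conventional downsampling stack with a dilated stack.
import torch
import torch.nn as nn

class PooledBackbone(nn.Module):
    """Conventional stack: each max-pooling halves spatial resolution."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                 # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                 # 1/4 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                 # 1/8 resolution
        )

    def forward(self, x):
        return self.features(x)

class DilatedBackbone(nn.Module):
    """Dilated stack: the dilation rate doubles at each layer, so the
    receptive field grows quickly while resolution is never reduced."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=4, dilation=4), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

if __name__ == "__main__":
    eye = torch.randn(1, 3, 64, 96)          # assumed eye patch (N, C, H, W)
    print(PooledBackbone()(eye).shape)       # torch.Size([1, 128, 8, 12])
    print(DilatedBackbone()(eye).shape)      # torch.Size([1, 128, 64, 96])
```

For a 3x3 kernel, setting the padding equal to the dilation rate preserves the input's height and width, so the dilated stack widens its receptive field layer by layer without discarding the fine spatial detail on which small gaze-induced appearance changes depend.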
Pages: 309-324
Page count: 16
Related Papers
50 records in total
  • [31] A Coarse-to-Fine Adaptive Network for Appearance-Based Gaze Estimation
    Cheng, Yihua
    Huang, Shiyao
    Wang, Fei
    Qian, Chen
    Lu, Feng
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 10623 - 10630
  • [32] Appearance-Based Gaze Estimation With Online Calibration From Mouse Operations
    Sugano, Yusuke
    Matsushita, Yasuyuki
    Sato, Yoichi
    Koike, Hideki
    [J]. IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2015, 45 (06) : 750 - 760
  • [33] PrivatEyes: Appearance-based Gaze Estimation Using Federated Secure Multi-Party Computation
    Elfares, Mayar
    Reisert, Pascal
    Hu, Zhiming
    Tang, Wenwu
    Küsters, Ralf
    Bulling, Andreas
[J]. PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION, 2024, 8 (ETRA)
  • [34] Learning to Personalize in Appearance-Based Gaze Tracking
Lindén, Erik
Sjöstrand, Jonas
    Proutiere, Alexandre
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 1140 - 1148
  • [35] Appearance-Based Gaze Estimation via Evaluation-Guided Asymmetric Regression
    Cheng, Yihua
    Lu, Feng
    Zhang, Xucong
    [J]. COMPUTER VISION - ECCV 2018, PT XIV, 2018, 11218 : 105 - 121
  • [36] Learning-by-Synthesis for Appearance-based 3D Gaze Estimation
    Sugano, Yusuke
    Matsushita, Yasuyuki
    Sato, Yoichi
    [J]. 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2014, : 1821 - 1828
  • [37] Appearance-Based Gaze Estimation as a Benchmark for Eye Image Data Generation Methods
    Katrychuk, Dmytro
    Komogortsev, Oleg V.
[J]. APPLIED SCIENCES (SWITZERLAND), 2024, 14 (20):
  • [38] TabletGaze: dataset and analysis for unconstrained appearance-based gaze estimation in mobile tablets
    Huang, Qiong
    Veeraraghavan, Ashok
    Sabharwal, Ashutosh
    [J]. MACHINE VISION AND APPLICATIONS, 2017, 28 (5-6) : 445 - 461
  • [39] Appearance-based Gaze Estimation with Multi-Modal Convolutional Neural Networks
    Wang, Fei
    Wang, Yan
    Li, Teng
    [J]. INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND ROBOTICS 2021, 2021, 11884
  • [40] Free-Head Appearance-Based Eye Gaze Estimation on Mobile Devices
    Jigang, Liu
    Lee, Bu Sung
    Rajan, Deepu
    [J]. 2019 1ST INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION (ICAIIC 2019), 2019, : 232 - 237