Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks

Cited by: 6
Authors
Kim, Jung-Hwa [1]
Jeong, Jin-Woo [1]
Affiliation
[1] Kumoh Natl Inst Technol, Dept Comp Engn, Gumi 39177, South Korea
Funding
National Research Foundation of Singapore;
Keywords
adversarial network; deep learning; gaze estimation; low-light environment; TRACKING;
DOI
10.3390/s20174935
CLC Number
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user's intent to ensure successful and natural interaction with the exhibition. In the context of predicting user intent, gaze estimation has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in a controlled lab environment under normal lighting conditions. In this study, we propose a novel deep learning-based approach to achieve successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach utilizes a generative adversarial network (GAN) to enhance users' eye images captured under low-light conditions, thereby restoring information missing for gaze estimation. Afterward, the GAN-recovered images are fed into a convolutional neural network as input data to estimate the direction of the user's gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53%-8.9% under low and dark light conditions, which is a promising step toward further research.
Pages: 1 - 20
Page count: 20
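The abstract describes a two-stage pipeline: a GAN generator first enhances the low-light eye image, and the enhanced image is then passed to a CNN that regresses the gaze direction. The sketch below illustrates that data flow only; the network architectures, layer sizes, and the 36x60 MPIIGaze-style eye-patch resolution are illustrative assumptions, not the authors' exact models.

```python
# Minimal sketch (PyTorch) of the GAN-enhance-then-CNN-regress pipeline
# described in the abstract. All architectures here are toy assumptions.
import torch
import torch.nn as nn


class EnhancerG(nn.Module):
    """Toy GAN generator: conv encoder-decoder mapping a dark eye patch
    to an enhanced one (the adversarial discriminator used during
    training is omitted; only inference is sketched here)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def forward(self, x):
        return self.net(x)


class GazeCNN(nn.Module):
    """Toy gaze regressor: conv features + linear head -> (yaw, pitch)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumed 36x60 input patch -> 9x15 feature map after two 2x poolings.
        self.head = nn.Linear(32 * 9 * 15, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


# Inference: enhance the under-exposed patch first, then estimate gaze.
generator, gaze_net = EnhancerG(), GazeCNN()
dark_eye = torch.rand(1, 1, 36, 60) * 0.1   # simulated low-light eye patch
enhanced = generator(dark_eye)              # GAN-recovered image
gaze = gaze_net(enhanced)                   # tensor of shape (1, 2): yaw, pitch
```

Separating enhancement from estimation, as in the paper, lets the gaze CNN be trained or reused on normally lit data while only the generator absorbs the low-light domain shift.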