Integrating Human Gaze into Attention for Egocentric Activity Recognition

Cited by: 17
Authors: Min, Kyle [1]; Corso, Jason J. [1]
Affiliations: [1] Univ Michigan, Ann Arbor, MI 48109 USA
DOI: 10.1109/WACV48630.2021.00111
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
It is well known that human gaze carries significant information about visual attention. However, there are three main difficulties in incorporating gaze data into the attention mechanism of a deep neural network: (i) gaze fixation points are likely to contain measurement errors due to blinking and rapid eye movements; (ii) it is unclear when, and to what extent, gaze data correlates with visual attention; and (iii) gaze data is often unavailable in real-world situations. In this work, we introduce an effective probabilistic approach to integrate human gaze into spatiotemporal attention for egocentric activity recognition. Specifically, we represent the locations of gaze fixation points as structured discrete latent variables to model their uncertainties. In addition, we model the distribution of gaze fixations using a variational method. The gaze distribution is learned during training, so ground-truth gaze annotations are no longer needed at test time: gaze locations are instead predicted from the learned distribution. The predicted gaze locations provide informative attentional cues that improve recognition performance. Our method outperforms all previous state-of-the-art approaches on EGTEA, a large-scale egocentric activity recognition dataset provided with gaze measurements. We also perform an ablation study and qualitative analysis to demonstrate that our attention mechanism is effective.
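The pipeline the abstract describes lends itself to a compact sketch: predict a spatial distribution over gaze locations, draw a (relaxed) discrete sample from it during training, and use the result as a spatial attention map. The following is a minimal PyTorch illustration of that general idea, not the authors' implementation; the module name GazeAttention, the 1x1-convolution gaze head, the Gumbel-softmax relaxation of the discrete latent, and the KL supervision term are all assumptions made for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeAttention(nn.Module):
    """Sketch: spatial gaze distribution used as attention (hypothetical)."""
    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 conv head producing one logit per spatial location (assumed design)
        self.gaze_head = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, tau: float = 1.0):
        # feats: (B, C, H, W) frame features from a video backbone
        b, _, h, w = feats.shape
        logits = self.gaze_head(feats).flatten(1)  # (B, H*W)
        if self.training:
            # Relaxed sample of the discrete latent gaze location
            # (Gumbel-softmax) keeps the sampling step differentiable.
            attn = F.gumbel_softmax(logits, tau=tau, hard=False)
        else:
            # At test time no gaze annotation is needed: the learned
            # distribution itself provides the attention map.
            attn = F.softmax(logits, dim=1)
        attn = attn.view(b, 1, h, w)
        # Residual weighting keeps context outside the fixation visible.
        return feats * (1.0 + attn), logits

def gaze_kl_loss(logits: torch.Tensor, gt_gaze_map: torch.Tensor):
    # Supervise the predicted distribution with the annotated fixation,
    # rendered as a normalized (B, H*W) heat map; treating the fixation
    # as a distribution rather than a point absorbs measurement noise
    # from blinks and saccades.
    return F.kl_div(F.log_softmax(logits, dim=1), gt_gaze_map,
                    reduction="batchmean")
```

In such a setup, gaze_kl_loss would be added to the activity-classification loss whenever gaze annotations are available during training, while at test time the module runs with no gaze input at all, matching the abstract's claim that annotations are only needed while learning the gaze distribution.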
Pages: 1068-1077 (10 pages)
Related Papers (50 in total)
  • [1] An Attention-based Activity Recognition for Egocentric Video
    Matsuo, Kenji; Yamada, Kentaro; Ueno, Satoshi; Naito, Sei
    2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2014: 565+
  • [2] Can Gaze Inform Egocentric Action Recognition?
    Zhang, Zehua; Crandall, David; Proulx, Michael J.; Talathi, Sachin S.; Sharma, Abhishek
    ACM Symposium on Eye Tracking Research and Applications (ETRA), 2022
  • [3] Flow-guided Spatial Attention Tracking for Egocentric Activity Recognition
    Liu, Tianshan; Lam, Kin-Man
    2020 25th International Conference on Pattern Recognition (ICPR), 2021: 4303-4308
  • [4] Enhanced Attention Tracking With Multi-Branch Network for Egocentric Activity Recognition
    Liu, Tianshan; Lam, Kin-Man; Zhao, Rui; Kong, Jun
    IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(6): 3587-3602
  • [5] Human Action Recognition Using Attention Mechanism and Gaze Information
    Djeddi, Chawki; Siddiqi, Imran; Jamil, Akhtar; Hameed, Alaa Ali; Kucuk, Ismail
    Communications in Computer and Information Science, 2022, CCIS 1543
  • [6] Egocentric Activity Recognition on a Budget
    Possas, Rafael; Caceres, Sheila Pinto; Ramos, Fabio
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 5967-5976
  • [7] Deep Attention Network for Egocentric Action Recognition
    Lu, Minlong; Li, Ze-Nian; Wang, Yueming; Pan, Gang
    IEEE Transactions on Image Processing, 2019, 28(8): 3703-3713
  • [8] Learning Spatiotemporal Attention for Egocentric Action Recognition
    Lu, Minlong; Liao, Danping; Li, Ze-Nian
    2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2019: 4425-4434
  • [9] Egocentric Vision for Human Activity Recognition Using Deep Learning
    Douache, Malika; Benmoussat, Badra Nawal
    Journal of Information Processing Systems, 2023, 19(6): 730-744
  • [10] Integrating Gaze and Mouse Via Joint Cross-Attention Fusion Net for Students' Activity Recognition in E-learning
    Zhu, Rongrong; Shi, Liang; Song, Yunpeng; Cai, Zhongmin
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2023, 7(3)