Context-Aware Head-and-Eye Motion Generation with Diffusion Model

Cited by: 0
Authors
Shen, Yuxin [1 ]
Xu, Manjie [2 ]
Liang, Wei [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Yangtze Delta Region Acad, Jiaxing, Peoples R China
[2] Beijing Inst Technol, Beijing, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Human-centered computing / Human-computer interaction (HCI) / Interaction paradigms / Virtual reality; ATTENTION;
DOI
10.1109/VR58804.2024.00039
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In humanity's ongoing quest to craft natural and realistic avatars within virtual environments, the generation of authentic eye gaze behaviors is paramount. Eye gaze not only serves as a primary non-verbal communication cue, but also reflects cognitive processes, intent, and attentiveness, making it a crucial element in ensuring immersive interactions. However, automatically generating these intricate gaze behaviors presents significant challenges. Traditional methods are often time-consuming and lack the precision to align gaze behaviors with the nuances of the environment in which the avatar resides. To overcome these challenges, we introduce a novel two-stage approach to generating context-aware head-and-eye motions across diverse scenes. By harnessing the capabilities of advanced diffusion models, our approach produces contextually appropriate eye gaze points, which in turn drive the generation of natural head-and-eye movements. Using Head-Mounted Display (HMD) eye-tracking technology, we also present a comprehensive dataset that captures human eye gaze behaviors in tandem with the associated scene features. We show that our approach consistently delivers intuitive and lifelike head-and-eye motions and demonstrates superior performance in terms of motion fluidity, alignment with contextual cues, and overall user satisfaction.
Pages: 157-167
Number of pages: 11
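
The abstract describes a two-stage pipeline: a scene-conditioned diffusion model first samples contextually appropriate gaze points, and a second stage turns those fixations into head-and-eye motion. The record contains no code, so the following is only a minimal NumPy sketch of that data flow. Every name here (toy_denoiser, sample_gaze_points, the 2D gaze representation, the interpolation-based second stage) is an illustrative assumption, not the authors' implementation; in particular, the denoiser is a hand-written stand-in for a trained, scene-conditioned noise predictor.

```python
import numpy as np

# --- Stage 1: diffusion-based gaze point sampler (toy DDPM-style sketch) ---

def make_noise_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule plus the cumulative alpha products used by DDPM."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def toy_denoiser(x_t, t, scene_feature):
    """Stand-in for a learned, scene-conditioned noise predictor eps_theta.
    A real model would be a neural network over rich scene features; here we
    simply treat the (hypothetical) scene feature as one salient 2D location
    and pretend the 'noise' is each sample's offset from it."""
    return x_t - scene_feature

def sample_gaze_points(scene_feature, n_points=8, T=50, seed=0):
    """Standard DDPM reverse loop producing 2D gaze points for one scene."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_noise_schedule(T)
    x = rng.standard_normal((n_points, 2))          # start from pure noise
    for t in reversed(range(T)):
        eps = toy_denoiser(x, t, scene_feature)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise        # sigma_t = sqrt(beta_t)
    return x

# --- Stage 2: turn sampled gaze points into a head-and-eye trajectory ---

def gaze_points_to_motion(gaze_points, steps_per_fixation=10):
    """Linear interpolation between fixations, as a placeholder for the
    learned motion generator the paper describes."""
    traj = []
    for a, b in zip(gaze_points[:-1], gaze_points[1:]):
        for s in range(steps_per_fixation):
            traj.append(a + (b - a) * s / steps_per_fixation)
    return np.stack(traj)

if __name__ == "__main__":
    scene_feature = np.array([0.3, -0.1])   # hypothetical salient location
    points = sample_gaze_points(scene_feature)
    motion = gaze_points_to_motion(points)
    print(points.shape, motion.shape)       # (8, 2) (70, 2)
```

The sketch only mirrors the described data flow: in the actual system, the noise predictor would be trained on the HMD eye-tracking dataset and conditioned on scene features, and the second stage would be a learned head-and-eye motion generator rather than linear interpolation between fixations.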