Implicit Human Intention Inference through Gaze Cues for People with Limited Motion Ability

Cited by: 0
Authors:
Li, Songpo [1 ]
Zhang, Xiaoli [1 ]
Affiliations:
[1] Colorado Sch Mines, Dept Mech Engn, Golden, CO 80401 USA
Keywords:
human intention; gaze; assistive technology; independent living;
DOI: not available
CLC Classification: TP [Automation and Computer Technology]
Subject Classification Code: 0812
Abstract
Promising assistive technologies offer hope of independent daily living for elderly and disabled people. However, most modern human-machine communication methods are not accessible to people with very limited motion ability, who cannot use them to effectively express their service requests. In this paper, we present a novel interaction framework that facilitates communication between humans and assistive devices. In this framework, human intention is inferred implicitly by monitoring gaze movements. The advantage of this approach is that gaze-based communication requires very little effort from the user, and most elderly and disabled people with motion impairments retain their visual capability. The architecture of the presented framework is introduced and its effectiveness validated. The relationship between human intentions and gaze behaviors is further discussed. This work is expected to simplify human-machine interaction, thereby promoting the adoption of assistive technologies and enhancing users' independence in daily living.
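To make the idea of implicit, gaze-based intention inference concrete, the following is a minimal illustrative sketch in Python. It is not the authors' algorithm: the dwell-time threshold, the gaze-sample format, and the object names are all hypothetical, and real systems would work from raw eye-tracker coordinates rather than pre-labeled gaze targets. The sketch simply shows the core principle that a sustained fixation on an object can be read as an implicit service request.

```python
from dataclasses import dataclass

# Hypothetical dwell time (seconds) after which a sustained fixation
# on an object is interpreted as the user's intended target.
DWELL_THRESHOLD_S = 0.8

@dataclass
class GazeSample:
    t: float      # timestamp in seconds
    target: str   # object the gaze currently falls on, or "none"

def infer_intended_object(samples):
    """Return the first object fixated continuously for at least
    DWELL_THRESHOLD_S, or None if no dwell is long enough."""
    current, start = None, 0.0
    for s in samples:
        if s.target != current:
            # Gaze moved to a new target: restart the dwell timer.
            current, start = s.target, s.t
        elif current != "none" and s.t - start >= DWELL_THRESHOLD_S:
            return current
    return None

# Example: the user glances at a cup, then dwells on a spoon.
samples = [GazeSample(0.0, "cup"), GazeSample(0.3, "spoon"),
           GazeSample(0.5, "spoon"), GazeSample(1.0, "spoon"),
           GazeSample(1.4, "spoon")]
print(infer_intended_object(samples))  # prints: spoon
```

The design choice here mirrors the paper's motivation: the user never issues an explicit command; intent is read passively from where the eyes linger, which demands no motor effort beyond looking.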
Pages: 257 - 262 (6 pages)
Related Papers (16 total)
  • [1] Gaze and motion information fusion for human intention inference
    Ravichandar, Harish Chaandar
    Kumar, Avnish
    Dani, Ashwin
    INTERNATIONAL JOURNAL OF INTELLIGENT ROBOTICS AND APPLICATIONS, 2018, 2 (02) : 136 - 148
  • [2] Gaze Based Implicit Intention Inference with Historical Information of Visual Attention for Human-Robot Interaction
    Nie, Yujie
    Ma, Xin
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2021, PT III, 2021, 13015 : 293 - 303
  • [3] Bayesian Human Intention Inference Through Multiple Model Filtering with Gaze-based Priors
    Ravichandar, Harish Chaandar
    Kumar, Avnish
    Dani, Ashwin
    2016 19TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION), 2016, : 2296 - 2302
  • [4] Enhancing Robotic Collaborative Tasks Through Contextual Human Motion Prediction and Intention Inference
    Laplaza, Javier
    Moreno, Francesc
    Sanfeliu, Alberto
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2024,
  • [5] Effects of Gaze on Human Behavior Prediction of Virtual Character for Intention Inference Design
    Yang, Liheng
    Sejima, Yoshihiro
    Watanabe, Tomio
    HUMAN INTERFACE AND THE MANAGEMENT OF INFORMATION, HIMI 2023, PT I, 2023, 14015 : 445 - 454
  • [6] Human Intention Inference through Interacting Multiple Model Filtering
    Ravichandar, Harish Chaandar
    Dani, Ashwin
    2015 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI), 2015, : 220 - 225
  • [7] Other people's gaze encoded as implied motion in the human brain
    Guterstam, Arvid
    Wilterson, Andrew I.
    Wachtell, Davis
    Graziano, Michael S. A.
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2020, 117 (23) : 13162 - 13167
  • [8] Human Intention Inference and On-Line Human Hand Motion Prediction for Human-Robot Collaboration
    Luo, Ren C.
    Mai, Licong
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 5958 - 5964
  • [9] 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments
    Li, Songpo
    Zhang, Xiaoli
    Webb, Jeremy D.
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2017, 64 (12) : 2824 - 2835
  • [10] A Study on Human Learning Ability during Classification of Motion and Colour Visual Cues and Their Combination
    Tchamova, Albena
    Dezert, Jean
    Bocheva, Nadejda
    Konstantinova, Pavlina
    Genova, Bilyana
    Stefanova, Miroslava
    CYBERNETICS AND INFORMATION TECHNOLOGIES, 2021, 21 (01) : 73 - 86