Natural Grasp Intention Recognition Based on Gaze in Human-Robot Interaction

Cited by: 8
Authors
Yang, Bo [1 ]
Huang, Jian [1 ]
Chen, Xinxing [2 ,3 ]
Li, Xiaolong [1 ]
Hasegawa, Yasuhisa [4 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automation, Key Lab Image Proc & Intelligent Control, Wuhan 430074, Peoples R China
[2] Shenzhen Key Lab Biomimet Robot & Intelligent Syst, Shenzhen 518055, Peoples R China
[3] Southern Univ Sci & Technol, Guangdong Prov Key Lab Human Augmentat & Rehabil Robot Univ, Shenzhen 518055, Peoples R China
[4] Nagoya Univ, Dept Micronano Mech Sci & Engn, Furo cho Chikusa ku, Nagoya 4648603, Japan
Funding
National Natural Science Foundation of China;
Keywords
Grasp intention recognition; gaze movement modeling; human-robot interaction; feature extraction; EYE-MOVEMENTS; PREDICTION; VISION;
DOI
10.1109/JBHI.2023.3238406
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Objective: While neuroscience research has established a link between vision and intention, studies of gaze-data features for intention recognition are lacking. Most existing gaze-based intention recognition approaches rely on deliberate long-term fixation and suffer from insufficient accuracy. To address the lack of features and the insufficient accuracy of previous studies, the primary objective of this study is to suppress noise in human gaze data and extract useful features for recognizing grasp intention.
Methods: We conduct gaze movement evaluation experiments to investigate the characteristics of gaze motion. Based on the findings, the target-attracted gaze movement model (TAGMM) is proposed as a quantitative description of gaze movement. A Kalman filter (KF) based on TAGMM is used to reduce the noise in the gaze data. We conduct gaze-based natural grasp intention recognition evaluation experiments to collect the subjects' gaze data. Four types of features describing gaze point dispersion (f_var), gaze point movement (f_gm), head movement (f_hm), and the distance from the gaze points to objects (f_dj) are then proposed to recognize the subjects' grasp intentions. With the proposed features, we perform intention recognition experiments employing various classifiers, and the results are compared with those of other methods.
Results: The statistical analysis reveals that the proposed features differ significantly across intentions, offering the possibility of employing these features to recognize grasp intentions. We demonstrate the intention recognition performance of the TAGMM and the proposed features in within-subject and cross-subject experiments. The results indicate that the proposed method recognizes intention with accuracy improvements of 44.26% (within-subject) and 30.67% (cross-subject) over the fixation-based method. The proposed method also takes less time (34.87 ms) to recognize intention than the fixation-based method (about 1 s).
Conclusion: This work introduces a novel TAGMM for modeling gaze movement and a variety of practical features for recognizing grasp intentions. Experiments confirm the effectiveness of our approach.
Significance: The proposed TAGMM can model gaze movements and be used to process gaze data, and the proposed features can reveal the user's intentions. These results contribute to the development of gaze-based human-robot interaction.
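To make the denoising step in the abstract concrete, the sketch below smooths 2D gaze samples with a plain constant-velocity Kalman filter. This is a minimal, generic formulation and not the paper's exact TAGMM-based filter; the sampling rate and the process/measurement noise values are illustrative assumptions.

```python
import numpy as np

def kalman_smooth_gaze(gaze_xy, dt=1.0 / 120, process_var=50.0, meas_var=4.0):
    """Smooth a (T, 2) array of gaze points with a constant-velocity Kalman filter.

    State vector: [x, y, vx, vy]. The noise variances are illustrative and
    would need tuning for a real eye tracker.
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                        # position integrates velocity
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                       # only position is measured
    Q = process_var * np.eye(4)                   # process noise (model mismatch)
    R = meas_var * np.eye(2)                      # eye-tracker measurement noise

    x = np.array([gaze_xy[0, 0], gaze_xy[0, 1], 0.0, 0.0])
    P = np.eye(4) * 100.0
    smoothed = np.empty_like(gaze_xy, dtype=float)

    for t, z in enumerate(gaze_xy):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new gaze sample
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        smoothed[t] = x[:2]
    return smoothed
```

A quick sanity check is to feed the function noisy samples of a synthetic saccade-like trajectory and verify that the output variance around the true path drops relative to the raw samples.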
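The four window-level features named in the abstract (gaze point dispersion, gaze point movement, head movement, and gaze-to-object distance) can likewise be sketched as simple statistics over a sample window. The exact definitions, window length, and classifier used in the paper may differ; the function name, the head_pose and object_centers inputs, and the SVM choice below are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC  # any off-the-shelf classifier could be used here

def extract_window_features(gaze_xy, head_pose, object_centers):
    """Compute four illustrative features over one window of samples.

    gaze_xy:        (T, 2) smoothed gaze points
    head_pose:      (T, 3) head position (or orientation) samples
    object_centers: (K, 2) centers of candidate objects in the same frame
    """
    f_var = gaze_xy.var(axis=0).sum()                                 # gaze point dispersion
    f_gm = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1).sum()     # gaze point movement
    f_hm = np.linalg.norm(np.diff(head_pose, axis=0), axis=1).sum()   # head movement
    dists = np.linalg.norm(
        gaze_xy[:, None, :] - object_centers[None, :, :], axis=2)     # (T, K) distances
    f_d = dists.min(axis=1).mean()                                    # mean distance to nearest object
    return np.array([f_var, f_gm, f_hm, f_d])

# Hypothetical usage: X is a stack of such feature vectors, y the intention labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```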
Pages: 2059-2070
Number of pages: 12
Related papers
50 items in total
  • [1] Gaze-Based Intention Recognition for Human-Robot Collaboration
    Belcamino, Valerio
    Takase, Miwa
    Kilina, Mariya
    Carfi, Alessandro
    Shimada, Akira
    Shimizu, Sota
    Mastrogiovanni, Fulvio
    [J]. PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON ADVANCED VISUAL INTERFACES, AVI 2024, 2024,
  • [2] Interaction Intention Recognition via Human Emotion for Human-Robot Natural Interaction
    Yang, Shengtian
    Guan, Yisheng
    Li, Yihui
    Shi, Wenjing
    [J]. 2022 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2022, : 380 - 385
  • [3] Towards Natural and Intuitive Human-Robot Collaboration based on Goal-Oriented Human Gaze Intention Recognition
    Lim, Taeyhang
    Lee, Joosun
    Kim, Wansoo
    [J]. 2023 SEVENTH IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING, IRC 2023, 2023, : 115 - 120
  • [4] GazeEMD: Detecting Visual Intention in Gaze-Based Human-Robot Interaction
    Shi, Lei
    Copot, Cosmin
    Vanlanduit, Steve
    [J]. ROBOTICS, 2021, 10 (02)
  • [5] Using Gaze Patterns to Infer Human Intention for Human-Robot Interaction
    Li, Kang
    Wu, Jinting
    Zhao, Xiaoguang
    Tan, Min
    [J]. 2018 13TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2018, : 933 - 938
  • [6] Visual Intention Classification by Deep Learning for Gaze-based Human-Robot Interaction
    Shi, Lei
    Copot, Cosmin
    Vanlanduit, Steve
    [J]. IFAC PAPERSONLINE, 2020, 53 (05): : 750 - 755
  • [7] Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction
    Zhao, Xiyuan
    Li, Huijun
    Miao, Tianyuan
    Zhu, Xianyi
    Wei, Zhikai
    Tan, Lifen
    Song, Aiguo
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (09): : 7819 - 7826
  • [8] Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction
    Trick, Susanne
    Koert, Dorothea
    Peters, Jan
    Rothkopf, Constantin A.
    [J]. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 7009 - 7016
  • [9] Gaze Based Implicit Intention Inference with Historical Information of Visual Attention for Human-Robot Interaction
    Nie, Yujie
    Ma, Xin
    [J]. INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2021, PT III, 2021, 13015 : 293 - 303
  • [10] Analysing Action and Intention Recognition in Human-Robot Interaction with ANEMONE
    Alenljung, Beatrice
    Lindblom, Jessica
    [J]. HUMAN-COMPUTER INTERACTION: INTERACTION TECHNIQUES AND NOVEL APPLICATIONS, HCII 2021, PT II, 2021, 12763 : 181 - 200