Visual Intention Classification by Deep Learning for Gaze-based Human-Robot Interaction

Cited: 0
Authors
Shi, Lei [1 ]
Copot, Cosmin [1 ]
Vanlanduit, Steve [1 ]
Affiliations
[1] Univ Antwerp, Op3Mech, Groenenborgerlaan 171, B-2020 Antwerp, Belgium
Source
IFAC PAPERSONLINE | 2020, Vol. 53, Issue 05
Keywords
deep learning; gaze intention; HRI;
DOI
10.1016/j.ifacol.2021.04.168
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
In this work, we propose a deep learning model to classify a human's visual intention in gaze-based Human-Robot Interaction (HRI). We consider a scenario in which a human wears a pair of eye-tracking glasses and selects an object by gaze, and a robotic manipulator picks up that object. A neural network is trained as a binary classifier to decide whether the human is looking at an object. The network architecture is based on a Fully Convolutional Network (FCN), Convolutional Block Attention Modules (CBAM), and residual blocks. We evaluate our model in two experiments: one tests performance in a scenario with a single object, the other in a scenario with multiple objects. The results show that the proposed network is accurate and generalizes well, with an F1 score of 0.971 on the single-object case and 0.962 on the multiple-object case. Copyright (C) 2020 The Authors.
Pages: 750-755
Number of pages: 6
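The abstract describes a binary gaze-intention classifier built from a fully convolutional backbone, residual blocks, and CBAM attention, but this record gives no implementation details. Below is a minimal sketch in PyTorch, assuming hypothetical layer widths, input resolution, and block ordering (residual body followed by channel and spatial attention); GazeIntentionNet and all parameter choices here are illustrative and are not the authors' published network.

# Hypothetical sketch (PyTorch). Layer sizes, the 96x96 input crop, and the
# way residual blocks, CBAM, and the fully convolutional head are combined
# are assumptions for illustration only.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM channel attention: pool over space, re-weight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))          # average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))           # max-pooled descriptor
        return x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)


class SpatialAttention(nn.Module):
    """CBAM spatial attention: pool over channels, re-weight locations."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class ResidualCBAMBlock(nn.Module):
    """Residual block whose output is refined by CBAM before the skip add."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        out = self.sa(self.ca(self.body(x)))
        return torch.relu(out + x)


class GazeIntentionNet(nn.Module):
    """Fully convolutional binary classifier: 'looking at the object' vs. not."""
    def __init__(self, in_channels=3, width=32, depth=3):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualCBAMBlock(width) for _ in range(depth)])
        self.head = nn.Conv2d(width, 1, 1)          # 1x1 conv keeps the net fully convolutional

    def forward(self, x):
        score = self.head(self.blocks(self.stem(x)))
        return score.mean(dim=(2, 3))               # global average -> one logit per image


if __name__ == "__main__":
    net = GazeIntentionNet()
    logits = net(torch.randn(2, 3, 96, 96))         # e.g. image crops around the gaze point
    print(torch.sigmoid(logits))                    # probability of "looking at the object"

Trained with a binary cross-entropy loss on such logits, this kind of model can act as the gaze-selection trigger described in the abstract: a crop around the measured gaze point is classified, and a positive decision tells the manipulator which object to pick up.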