Pointing Gestures for Human-Robot Interaction in Service Robotics: A Feasibility Study

Cited by: 0
Authors
Pozzi, Luca [1 ]
Gandolla, Marta [2 ]
Roveda, Loris [3 ]
Affiliations
[1] Politecn Milan, WE COBOT Lab Polo Territoriale Lecco, Mech Dept, Lecco, Italy
[2] Politecn Milan, Mech Dept, Milan, Italy
[3] Univ Svizzera Italiana USI, Scuola Univ Profess Svizzera Italiana SUPSI, Ist Dalle Molle Studi Intelligenza Artificiale IDSIA, Lugano, Switzerland
Keywords
Human-Robot Interaction; Pointing; Service robotics; Action detection;
DOI
10.1007/978-3-031-08645-8_54
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Research in service robotics strives to improve people's quality of life through the introduction of robotic helpers for everyday activities. From this ambition arises the need to enable natural communication between robots and ordinary people. For this reason, Human-Robot Interaction (HRI) is an extensively investigated topic that goes beyond language-based exchange of information to include all the relevant facets of communication. Each communication channel (e.g. hearing, sight, touch) comes with its own strengths and limits, so channels are often combined to improve robustness and naturalness. In this contribution, an HRI framework is presented that adopts pointing gestures as the preferred interaction strategy. Pointing gestures are selected because they are an innate behavior for directing another's attention, and thus could represent a natural way to request a service from a robot. To complement the visual information, the user can be prompted to give voice commands that resolve ambiguities and prevent the execution of unintended actions. The two-layer (perceptive and semantic) architecture of the proposed HRI system is described. The perceptive layer is responsible for object mapping, action detection, and assessment of the indicated direction; it also listens to the user's voice commands. To avoid privacy issues and to limit the load on the robot's computational resources, the interaction is triggered by a wake-word detection system. The semantic layer receives the information processed by the perceptive layer and determines which actions are available for the selected object. The decision is based on the object's characteristics and contextual information, and the user's vocal feedback is exploited to resolve ambiguities. A pilot implementation of the semantic layer is detailed, and qualitative results are shown. Preliminary findings on the validity of the proposed system, as well as on the limitations of a purely vision-based approach, are discussed.
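To make the described two-layer pipeline concrete, a minimal Python sketch follows (assuming NumPy). It models the indicated direction as a ray from the user's shoulder through the wrist and resolves it to the nearest mapped object, standing in for the perceptive layer, then selects an action for that object and falls back on a vocal prompt only when several actions are available, standing in for the semantic layer. All names here (pointed_object, select_action, the AFFORDANCES table) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Perceptive-layer sketch (assumption): the pointing direction is modeled as a
    # ray from the user's shoulder through the wrist; the paper does not prescribe
    # this particular model.
    def pointed_object(shoulder, wrist, object_positions):
        """Return the name of the mapped object closest to the pointing ray."""
        origin = np.asarray(shoulder, dtype=float)
        direction = np.asarray(wrist, dtype=float) - origin
        direction /= np.linalg.norm(direction)

        def ray_distance(position):
            v = np.asarray(position, dtype=float) - origin
            t = max(float(v @ direction), 0.0)  # clamp: ignore targets behind the user
            return float(np.linalg.norm(v - t * direction))

        return min(object_positions, key=lambda name: ray_distance(object_positions[name]))

    # Semantic-layer sketch: a hypothetical affordance table mapping each mapped
    # object to the actions it supports.
    AFFORDANCES = {"bottle": ["fetch", "discard"], "door": ["open"]}

    def select_action(obj_name, ask_user):
        """Pick an action; query the user by voice only when the choice is ambiguous."""
        actions = AFFORDANCES.get(obj_name, [])
        if len(actions) == 1:
            return actions[0]            # unambiguous: act without a vocal prompt
        if actions:
            answer = ask_user(f"Should I {' or '.join(actions)} the {obj_name}?")
            if answer in actions:
                return answer
        return None                      # no valid action: do nothing

    # Usage: a gesture toward the bottle, then a vocal confirmation.
    objects = {"bottle": (1.0, 0.5, 0.8), "door": (3.0, -1.0, 1.0)}
    target = pointed_object((0.0, 0.0, 1.4), (0.3, 0.15, 1.3), objects)
    print(target, select_action(target, ask_user=lambda question: "fetch"))  # bottle fetch

Clamping t to non-negative values keeps objects behind the pointing arm from being selected; in a full implementation the wake-word detector would gate this whole loop, as the abstract describes.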
Pages: 461-468 (8 pages)