Multi-modal Proactive Approaching of Humans for Human-Robot Cooperative Tasks

Cited by: 3
Authors
Naik, Lakshadeep [1 ]
Palinko, Oskar [1 ]
Bodenhagen, Leon [1 ]
Krueger, Norbert [1 ]
Affiliations
[1] Univ Southern Denmark, Fac Engn, Maersk Mc Kinney Moller Inst MMMI, SDU Robot, Campusvej 55, Odense M, Denmark
Keywords
NAVIGATION; MOTION;
DOI
10.1109/RO-MAN50785.2021.9515475
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we present a method for proactive approaching of humans for human-robot cooperative tasks such as a robot serving beverages to people. The proposed method can deal robustly with the uncertainties in the robot's perception while also ensuring socially acceptable behavior. We use multiple modalities in the form of the robot's motion, body orientation, speech and gaze to proactively approach humans. Further, we present a behavior tree based control architecture to efficiently integrate these different modalities. The proposed method was successfully integrated and tested on a beverage serving robot. We present the findings of our experiments and discuss possible extensions to address limitations.
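The abstract describes a behavior-tree control architecture that integrates motion, body orientation, speech, and gaze into one approaching behavior. The paper itself does not publish code; the following is an illustrative sketch only, with hypothetical node names (e.g. `detect_human`, `establish_gaze`), showing how a sequence of modality actions with a selector-based fallback (speech when gaze fails) can be composed in a minimal behavior tree:

```python
# Minimal behavior-tree sketch for a multi-modal approach behavior.
# Hypothetical illustration; not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, List

SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

@dataclass
class Action:
    """Leaf node: runs one modality-specific behavior."""
    name: str
    fn: Callable[[], str]
    def tick(self) -> str:
        return self.fn()

@dataclass
class Sequence:
    """Ticks children in order; fails or pauses on the first non-success."""
    children: List
    def tick(self) -> str:
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

@dataclass
class Selector:
    """Fallback: ticks children until one does not fail."""
    children: List
    def tick(self) -> str:
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

log = []  # records which modality actions actually ran

def make_action(name: str, result: str = SUCCESS) -> Action:
    def fn():
        log.append(name)
        return result
    return Action(name, fn)

# Approach behavior: perceive, orient the body, try gaze and fall
# back to speech, then navigate toward the person.
approach = Sequence([
    make_action("detect_human"),
    make_action("orient_body"),
    Selector([
        make_action("establish_gaze", FAILURE),  # gaze cue fails here
        make_action("speak_greeting"),           # speech takes over
    ]),
    make_action("navigate_to_human"),
])

status = approach.tick()
print(status, log)
```

The selector node is what makes the integration robust: if one modality cannot engage the person (here, gaze), the tree falls back to another (speech) without aborting the whole approach.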
Pages: 323 - 329
Page count: 7