Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction

Cited by: 0
Authors
Trick, Susanne [1 ,2 ]
Koert, Dorothea [2 ,3 ]
Peters, Jan [2 ,3 ,4 ]
Rothkopf, Constantin A. [1 ,2 ,5 ]
Affiliations
[1] Tech Univ Darmstadt, Psychol Informat Proc, Darmstadt, Germany
[2] Tech Univ Darmstadt, Ctr Cognit Sci, Darmstadt, Germany
[3] Tech Univ Darmstadt, Intelligent Autonomous Syst, Darmstadt, Germany
[4] MPI Intelligent Syst, Tübingen, Germany
[5] Goethe Univ, Frankfurt Inst Adv Studies, Frankfurt, Germany
Keywords
Data fusion
DOI
10.26083/tuprints-00020552
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Assistive robots can potentially improve the quality of life and personal independence of elderly people by supporting everyday activities. To guarantee safe and intuitive interaction between human and robot, human intentions need to be recognized automatically. Since humans communicate their intentions multimodally, using multiple modalities for intention recognition may not only increase robustness against the failure of individual modalities but, above all, reduce the uncertainty about the intention to be recognized. Such a reduction is desirable because, particularly in direct interaction between robots and potentially vulnerable humans, minimal uncertainty about the situation, as well as knowledge of this actual uncertainty, is necessary. Thus, in contrast to existing methods, this work introduces a new approach to multimodal intention recognition that focuses on uncertainty reduction through classifier fusion. For each of the four considered modalities, speech, gestures, gaze directions, and scene objects, an individual intention classifier is trained, each of which outputs a probability distribution over all possible intentions. By combining these output distributions using the Bayesian method Independent Opinion Pool [1], the uncertainty about the intention to be recognized can be reduced. The approach is evaluated in a collaborative human-robot interaction task with a 7-DoF robot arm. The results show that the fused classifiers, which combine multiple modalities, outperform the respective individual base classifiers in terms of increased accuracy and robustness and reduced uncertainty.
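To make the fusion step concrete, the following minimal Python sketch illustrates Independent Opinion Pool fusion under the assumption of a uniform prior over intentions: the fused posterior is then the normalized elementwise product of the per-modality output distributions. The helper names, the classifier outputs, and the three-intention setup are hypothetical and purely illustrative; the entropy check merely demonstrates the claimed uncertainty reduction and is not the authors' implementation.

import numpy as np

def independent_opinion_pool(distributions):
    # Fuse per-modality intention distributions via Independent Opinion Pool.
    # Assuming a uniform prior over intentions, the fused posterior is the
    # normalized elementwise product of the individual classifier outputs.
    fused = np.prod(np.asarray(distributions), axis=0)
    total = fused.sum()
    if total == 0.0:  # degenerate case: modalities fully contradict each other
        return np.full(fused.shape, 1.0 / fused.size)
    return fused / total

def entropy(p, eps=1e-12):
    # Shannon entropy in bits; lower entropy means lower uncertainty.
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log2(p)).sum())

# Hypothetical output distributions of the speech, gesture, gaze, and
# scene-object classifiers over three possible intentions (toy numbers).
speech  = np.array([0.50, 0.30, 0.20])
gesture = np.array([0.60, 0.20, 0.20])
gaze    = np.array([0.40, 0.40, 0.20])
objects = np.array([0.50, 0.25, 0.25])

fused = independent_opinion_pool([speech, gesture, gaze, objects])
print(fused)                             # approx. [0.88, 0.09, 0.03]
print(entropy(speech), entropy(fused))   # approx. 1.49 bits vs. 0.60 bits

Note that even though no single modality is very confident, their product concentrates probability mass on the intention they jointly favor, which is exactly the uncertainty-reduction effect the abstract describes.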
Pages: 7009-7016
Number of pages: 8
Related Papers
50 records in total
  • [1] Zhao, Xiyuan; Li, Huijun; Miao, Tianyuan; Zhu, Xianyi; Wei, Zhikai; Tan, Lifen; Song, Aiguo. Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction. IEEE Robotics and Automation Letters, 2024, 9(9): 7819-7826.
  • [2] Rodomagoulakis, I.; Kardaris, N.; Pitsikalis, V.; Mavroudi, E.; Katsamanis, A.; Tsiami, A.; Maragos, P. Multimodal Human Action Recognition in Assistive Human-Robot Interaction. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016: 2702-2706.
  • [3] Park, Cheonshu; Kim, Jaehong; Kang, Ji-Hoon. Turn-Taking Intention Recognition Using Multimodal Cues in Social Human-Robot Interaction. 2017 17th International Conference on Control, Automation and Systems (ICCAS), 2017: 1300-1302.
  • [4] Yang, Shengtian; Guan, Yisheng; Li, Yihui; Shi, Wenjing. Interaction Intention Recognition via Human Emotion for Human-Robot Natural Interaction. 2022 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2022: 380-385.
  • [5] Alenljung, Beatrice; Lindblom, Jessica. Analysing Action and Intention Recognition in Human-Robot Interaction with ANEMONE. Human-Computer Interaction: Interaction Techniques and Novel Applications, HCII 2021, Part II, 2021, 12763: 181-200.
  • [6] Perez-Gaspar, Luis-Alberto; Caballero-Morales, Santiago-Omar; Trujillo-Romero, Felipe. Multimodal Emotion Recognition with Evolutionary Computation for Human-Robot Interaction. Expert Systems with Applications, 2016, 66: 42-61.
  • [7] Yang, Bo; Huang, Jian; Chen, Xinxing; Li, Xiaolong; Hasegawa, Yasuhisa. Natural Grasp Intention Recognition Based on Gaze in Human-Robot Interaction. IEEE Journal of Biomedical and Health Informatics, 2023, 27(4): 2059-2070.
  • [8] Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle. Multimodal Interaction for Human-Robot Teams. Unmanned Systems Technology XV, 2013, 8741.
  • [9] Banda, Ntombikayise; Engelbrecht, Andries; Robinson, Peter. Feature Reduction for Dimensional Emotion Recognition in Human-Robot Interaction. 2015 IEEE Symposium Series on Computational Intelligence (IEEE SSCI), 2015: 803-810.
  • [10] Garcia, Joao A.; Lima, Pedro U.; Veiga, Tiago. Decision-Theoretic Planning Under Uncertainty for Multimodal Human-Robot Interaction. 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017: 779-784.