MULTIMODAL HUMAN ACTION RECOGNITION IN ASSISTIVE HUMAN-ROBOT INTERACTION

Cited by: 0
Authors
Rodomagoulakis, I. [1]
Kardaris, N. [1]
Pitsikalis, V. [1]
Mavroudi, E. [1]
Katsamanis, A. [1]
Tsiami, A. [1]
Maragos, P. [1]
Affiliations
[1] Natl Tech Univ Athens, Sch ECE, GR-15773 Athens, Greece
Keywords
multimodal sensor processing; assistive robotics; speech recognition; action-gesture recognition;
DOI
Not available
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Within the context of assistive robotics, we develop an intelligent interface that provides multimodal sensory processing capabilities for human action recognition. Human action is considered in multimodal terms, with inputs such as audio from microphone arrays and visual input from high-definition and depth cameras. Building on state-of-the-art approaches from automatic speech recognition and visual action recognition, we recognize actions and commands multimodally. By fusing the unimodal information streams, we obtain the optimal multimodal hypothesis, which is further exploited by the active mobility-assistance robot within the framework of the MOBOT EU research project. Recognition experiments show that integrating multiple sensors and modalities improves multimodal recognition performance on a newly acquired, challenging dataset of elderly people interacting with the assistive robot.
Pages: 2702-2706
Number of pages: 5
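
The abstract describes fusing the unimodal information streams (speech recognition and visual action recognition) to obtain the optimum multimodal hypothesis. The sketch below illustrates one generic way such late fusion could work; it is not the authors' implementation, and the command vocabulary, posterior scores, and fusion weights are hypothetical placeholders.

```python
# Minimal late-fusion sketch (not the paper's implementation): combine
# per-modality posterior scores over a shared command vocabulary and pick
# the highest-scoring multimodal hypothesis. All names, scores, and
# weights here are hypothetical placeholders.
import numpy as np

COMMANDS = ["come_closer", "stop", "help_me_stand"]  # hypothetical vocabulary

def fuse_late(audio_scores, visual_scores, w_audio=0.6, w_visual=0.4):
    """Weighted log-linear fusion of unimodal posteriors; weights are assumed."""
    audio = np.asarray(audio_scores, dtype=float)
    visual = np.asarray(visual_scores, dtype=float)
    # Small constant avoids log(0) for commands a recognizer rules out entirely.
    fused = w_audio * np.log(audio + 1e-12) + w_visual * np.log(visual + 1e-12)
    best = int(np.argmax(fused))
    return COMMANDS[best], fused

if __name__ == "__main__":
    # Hypothetical unimodal posteriors from the speech and visual recognizers.
    spoken = [0.70, 0.20, 0.10]   # ASR favours "come_closer"
    gesture = [0.55, 0.30, 0.15]  # visual action recognizer agrees
    command, scores = fuse_late(spoken, gesture)
    print(command, scores)
```

Weighted log-linear combination is a common late-fusion choice because each modality's reliability can be tuned independently; the record does not state which fusion rule the authors actually use.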