Interacting with robots via speech and gestures, an integrated architecture

Times cited: 0
|
Authors
Cutugno, Francesco [1 ]
Finzi, Alberto [1 ]
Fiore, Michelangelo [1 ]
Leone, Enrico [1 ]
Rossi, Silvia [1 ]
Affiliation
[1] Univ Naples Federico II, Dept Elect Engn & Informat Technol, Naples, Italy
Keywords
speech commands; gesture; multimodal interaction; RECOGNITION; FUSION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Effective human-robot communication is one of the main concerns in modern robotics. The systems involved should be highly robust, leaving little room for misunderstanding users' commands. The main purpose of this work is to develop a general framework for multimodal human-robot communication that allows users to interact with robots using speech and gestures integrated into unified commands. The proposed architecture relies on separate modules that analyse the low-level inputs, together with a fusion module that extracts semantics from these multiple channels. In this paper, we introduce our general approach and provide a case study in which gesture and speech modalities are combined.
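The abstract describes separate low-level recognizers (speech, gesture) whose outputs a fusion module merges into a single command. A minimal sketch of such late fusion, assuming a simple time-window alignment rule and hypothetical `Observation`/`fuse` names (the paper's actual fusion module extracts semantics and is more elaborate):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One recognized event from a single input channel."""
    channel: str   # e.g. "speech" or "gesture"
    label: str     # recognized symbol, e.g. "take this" or "pointing"
    start: float   # start time in seconds
    end: float     # end time in seconds

def fuse(speech: Observation, gesture: Observation, max_gap: float = 1.0):
    """Pair a speech command with a gesture when their time spans
    overlap, or lie within max_gap seconds of each other.
    Returns the fused command tuple, or None if they cannot be paired."""
    gap = max(speech.start, gesture.start) - min(speech.end, gesture.end)
    if gap <= max_gap:
        return (speech.label, gesture.label)
    return None
```

With this rule, a spoken "take this" at 0.0–0.8 s and a pointing gesture at 0.5–1.2 s fuse into one command, while a gesture three seconds later does not.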
Pages: 3694-3698
Number of pages: 5
Related articles
50 records
  • [1] Talk, voice and gestures in reported speech: toward an integrated approach
    Soulaimani, Dris
    [J]. DISCOURSE STUDIES, 2018, 20 (03): 361-376
  • [2] Towards Culture-Aware Co-Speech Gestures for Social Robots
    Gjaci, Ariel
    Recchiuto, Carmine Tommaso
    Sgorbissa, Antonio
    [J]. INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2022, 14 (06): 1493-1506
  • [3] Constrained vs spontaneous speech and gestures for interacting with computers: A comparative empirical study
    Robbe, S
    Carbonell, N
    Dauchy, P
    [J]. HUMAN-COMPUTER INTERACTION - INTERACT '97, 1997: 445-452
  • [4] PointIt: A ROS Toolkit for Interacting with Co-located Robots using Pointing Gestures
    Abbate, Gabriele
    Giusti, Alessandro
    Paolillo, Antonio
    Gromov, Boris
    Gambardella, Luca
    Rizzoli, Andrea Emilio
    Guzzi, Jerome
    [J]. PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), 2022: 608-612
  • [5] Control Architecture for Human Friendly Robots Based on Interacting with Human
    Masuta, Hiroyuki
    Hiwada, Eriko
    Kubota, Naoyuki
    [J]. INTELLIGENT ROBOTICS AND APPLICATIONS, PT II, 2011, 7102: 210-+
  • [6] Towards Facial Gestures Generation by Speech Signal Analysis Using HUGE Architecture
    Zoric, Goranka
    Smid, Karlo
    Pandzic, Igor S.
    [J]. MULTIMODAL SIGNAL: COGNITIVE AND ALGORITHMIC ISSUES, 2009, 5398: 112-+
  • [7] Pain gestures: the orchestration of speech and body gestures
    Hydén, LC
    Peolsson, M
    [J]. HEALTH, 2002, 6 (03): 325-345
  • [8] The perception of speech gestures
    Surprenant, AM
    Goldstein, L
    [J]. JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 1998, 104 (01): 518-529
  • [9] GESTURES AND THE DISAMBIGUATION OF SPEECH
    Gunter, Thomas
    [J]. PSYCHOPHYSIOLOGY, 2009, 46: S26-S26